In order for students attending a school to receive Title IV funds, the school must be: (1) licensed or otherwise legally authorized to provide higher education in the state in which it is located; (2) accredited by an agency recognized for that purpose by the Secretary; and (3) deemed eligible and certified to participate in federal student aid programs by Education. Under the Higher Education Act, Education does not determine the quality of higher education institutions or their programs; rather, it relies on recognized accrediting agencies to do so. As part of its role in the administration of federal student aid programs, Education determines which institutions of higher education are eligible to participate in Title IV programs. Education is responsible for overseeing school compliance with Title IV laws and regulations and ensuring that only eligible students receive federal student aid. As part of its compliance monitoring, Education relies on department employees and independent auditors of schools to conduct program reviews and audits of schools. Institutions that participate in Title IV programs must comply with a range of requirements, including consumer disclosure requirements, which cover information schools must make available to third parties, as well as reporting requirements, which cover information schools must provide to Education. Congress and the President enact the statutes that create federal programs; these statutes may also authorize or direct a federal agency to develop and issue regulations to implement them. Both the authorizing statute and the implementing regulations may contain requirements that recipients must comply with in order to receive federal funds. The statute itself may impose specific requirements; alternatively, it may set general parameters and the implementing agency may then issue regulations further clarifying the requirements. 
Federal agencies may evaluate and modify their regulatory requirements, but they lack the authority to modify requirements imposed by statute. In addition, when issuing rules related to programs authorized under Title IV, Education is generally required by the HEA to use negotiated rulemaking, a process that directly involves stakeholders in drafting proposed regulations. Once the department determines that a rulemaking is necessary, it publishes a notice in the Federal Register, announcing its intent to form a negotiated rulemaking committee, and holds public hearings to seek input on the issues to be negotiated. Stakeholders, who are nominated by the public and selected by Education to serve as negotiators, may include schools and their professional associations, as well as student representatives and other interested parties. A representative from Education and stakeholders work together on a committee that attempts to reach consensus, which Education defines as unanimous agreement on the entire proposed regulatory language. If consensus is reached, Education will generally publish the agreed-upon language as its proposed rule. If consensus is not reached, Education is not bound by the results of the negotiating committee when drafting the proposed rule. According to proponents, the negotiated rulemaking process increases the flow of information between the department and those who must implement requirements. Once a proposed rule is published, Education continues the rulemaking process by providing the public an opportunity to comment before issuing the final rule. The Paperwork Reduction Act (PRA) requires federal agencies to assess and seek public comment on certain kinds of burden, in accordance with its purpose of minimizing the paperwork burden and maximizing the utility of information collected by the federal government. 
Under the PRA, agencies are generally required to seek public comment and obtain Office of Management and Budget (OMB) approval before collecting information from the public, including schools. Agencies seek OMB approval by submitting information collection requests (ICR), which include, among other things, a description of the planned collection efforts, as well as estimates of the burden in terms of time, effort, or financial resources that respondents will expend to gather and submit the information. Agencies are also required to solicit public comment on proposed information collections by publishing notices in the Federal Register. If a proposed information collection is part of a proposed rulemaking, the agency may include the PRA notice for the information collection in the Notice of Proposed Rulemaking for that rule. The PRA authorizes OMB to approve information collections for up to 3 years. Agencies seeking an extension of OMB approval must re-submit an ICR using similar procedures, including soliciting public comment on the continued need for and burden imposed by the information collection. Over the last two decades, there have been several efforts to examine the federal regulatory burden faced by schools (see table 1). While intended to make regulations more efficient and less burdensome, several of these efforts also acknowledge that regulation provides benefits to the government and the public at large. The specific results of these initiatives varied, as described below. For example, Executive Order 13563, which was issued in 2011, requires agencies to, among other things, develop plans to periodically review their existing significant regulations and determine whether these regulations should be modified, streamlined, expanded, or repealed to make the agencies’ regulatory programs more effective or less burdensome. 
Consistent with the order’s emphasis on public participation in the rulemaking process, OMB guidance encourages agencies to obtain public input on their plans. Although the 18 experts we interviewed offered varied opinions on which Title IV requirements are the most burdensome, 16 said that federal requirements impose burden on postsecondary schools. While no single requirement was cited as most burdensome by a majority of experts, 11 cited various consumer disclosures schools must provide or make available to the public, students, and staff (see table 2). Among other things, these disclosure requirements include providing certain information about schools, such as student enrollment, graduation rates, and cost of attendance. The most frequently mentioned consumer disclosure requirement—cited by 5 experts as burdensome—was the “Clery Act” campus security and crime statistics disclosure requirement. Two experts noted the burden associated with reporting security data, some of which may overlap with data collected by federal, state, and local law enforcement agencies. Beyond consumer disclosures, 4 experts stated that schools are burdened by requirements related to the return of unearned Title IV funds to the federal government when a student receiving financial aid withdraws from school. According to 2 experts, schools find it particularly difficult both to calculate the precise amount of funds that should be returned and to determine the date on which a student withdrew. Finally, 6 experts we interviewed stated that, in their view, it is the accumulation of burden imposed by multiple requirements—rather than burden derived from a single requirement—that accounts for the burden felt by postsecondary schools. Three stated that requirements are incrementally added, resulting in increased burden over time. Experts also described some of the benefits associated with Title IV requirements. 
For example, one expert stated that requiring schools to disclose information to students to help them understand that they have a responsibility to repay their loans could be beneficial. Another expert noted that consumer disclosures allow students to identify programs relevant to their interests and that they can afford. School officials who participated in our discussion groups told us that Title IV requirements impose burden in a number of ways, as shown in table 3. Participants in all eight groups discussed various requirements that they believe create burden for schools because they are, among other things, too costly and complicated. For example, participants in four groups said the requirement that schools receiving Title IV funds post a net price calculator on their websites—an application that provides consumers with estimates of the costs of attending a school—has proven costly or complicated, noting challenges such as those associated with the web application, obtaining the necessary data, or providing information that may not fit the schools’ circumstances. School officials from six discussion groups also noted that complying with requirements related to the Return of Title IV Funds can be costly because of the time required to calculate how much money should be returned to the federal government (see Appendix III for information on selected comments on specific federal requirements school officials described as burdensome). Participants in six of eight discussion groups said that consumer disclosures were complicated, and participants in seven groups said that Return of Title IV Funds requirements were complicated. For example, participants in one discussion group stated that consumer disclosures are complicated because reporting periods can vary for different types of information. Another explained that the complexity of consumer disclosures is a burden to staff because the information can be difficult to explain to current or prospective students. 
Also, participants in two groups stated that the complexity of consumer disclosures makes it difficult for schools to ensure compliance with the requirements. Likewise, participants noted that calculating the amount of Title IV funds that should be returned can be complicated because of the difficulty of determining the number of days a student attended class as well as the correct number of days in the payment period or period of enrollment for courses that do not span the entire period. Participants in three discussion groups found the complexity of Return of Title IV requirements made it difficult to complete returns within the required time frame. In addition, participants from four groups noted the complexity increases the risk of audit findings, which puts pressure on staff. Discussion group participants identified other types of concerns that apply primarily to consumer disclosures. For example, participants in two groups said that it is burdensome for schools to make public some disclosures, such as graduates’ job placement data, because they cannot easily be compared across schools, thereby defeating the purpose of the information. Like six of the experts we interviewed, participants in six discussion groups noted that burden results from the accumulation of many requirements rather than a few difficult requirements. Two participants said that when new requirements are added, generally, none are taken away. Similarly, two other participants commented that the amount of information schools are required to report grows over time. Another commented that it is difficult to get multiple departments within a school to coordinate in order to comply with the range of requirements to which schools are subject under Title IV. Other federal requirements, in addition to those related to Title IV, may also apply to postsecondary schools (see Appendix IV for selected examples). School officials also described some benefits of Title IV requirements. 
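The day-counting that participants described can be illustrated with a simplified sketch of the pro-rata rule. This is an illustration only, not the full regulatory calculation: the actual Return of Title IV Funds process also accounts for scheduled breaks in the period, aid that could have been disbursed, and the division of responsibility for returns between the school and the student.

```python
from datetime import date

def earned_aid_fraction(start: date, end: date, withdrawal: date) -> float:
    """Simplified pro-rata share of Title IV aid 'earned' as of withdrawal.

    Days are counted as plain calendar days here; the actual rule excludes
    scheduled breaks of five or more days from the count.
    """
    total_days = (end - start).days + 1
    completed_days = (withdrawal - start).days + 1
    fraction = completed_days / total_days
    # Past the 60 percent point of the period, all aid is treated as earned.
    return 1.0 if fraction > 0.60 else fraction

def unearned_return(disbursed: float, start: date, end: date,
                    withdrawal: date) -> float:
    """Amount of disbursed aid to return, under this simplified model."""
    return round(disbursed * (1 - earned_aid_fraction(start, end, withdrawal)), 2)

# A student with $5,000 disbursed who withdraws 30 days into a 100-day
# payment period has earned 30 percent of the aid:
term_start, term_end = date(2013, 1, 7), date(2013, 4, 16)  # 100 calendar days
print(unearned_return(5000.0, term_start, term_end, date(2013, 2, 5)))  # 3500.0
```

Even in this stripped-down form, the calculation turns on exactly the two inputs participants said were hard to pin down: the student's withdrawal date and the correct number of days in the period.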
Participants in three discussion groups pointed out that some consumer information can be used to help applicants choose the right school. Other participants commented that consumer disclosures encourage transparency. For example, participants in two groups said the information schools are required to disclose regarding textbooks helps students compare prices and consider the total cost of books. Regarding Return of Title IV Funds, participants in three discussion groups stated that the process helps restore funds to the federal government that can be redirected to other students. Education seeks feedback on burden through formal channels such as publishing notices seeking comments on its burden estimates for proposed information collections, its retrospective analysis plan, and negotiated rulemaking. As shown in table 4, the department publishes notices in the Federal Register, on its website, and through a listserv to make the public aware of opportunities to provide feedback on burden. Department officials also said they receive some feedback from school officials through informal channels such as training sessions and open forums at conferences. Although Education has published notices seeking feedback on burden, officials said the department has received few comments in response to its solicitations. For example, Education said it received no comments in response to its request for public comment on burden estimates included in its 2010 “Program Integrity” Notices of Proposed Rulemaking, which proposed multiple regulatory changes with increased burden estimates. In addition, Education officials said some of the comments they receive about burden estimates are too general for the department to make modifications in response. We focused on ICRs submitted by two Education offices that manage postsecondary issues: the Office of Federal Student Aid and the Office of Postsecondary Education. 
We selected the time period because it coincides with the 2006 launch of the OMB and General Services Administration web portal, reginfo.gov, used by agencies to electronically post comments and other documents related to information collections; includes the enactment of the Higher Education Opportunity Act in 2008, which resulted in regulatory changes; and includes recently submitted ICRs. See Appendix I for additional information on the types of ICRs included in our review. Our review shows that fewer than one-fourth of the ICRs (65 of 353) received public comments, of which 25 included comments that addressed burden faced by schools (see fig. 1). For example, 2 ICRs received input on the difficulties of providing data requested by the department. We identified 40 ICRs that received comments, but none on burden faced by schools; several of these ICRs, for example, received input on simplifying the language of student loan–related forms. Further, in a review of the 30 comments received by the department in response to its proposed retrospective analysis plan, we identified 11 comments related to higher education, of which 9 mentioned regulatory burden. For example, one commenter described difficulties that smaller schools may have meeting reporting requirements. Negotiated rulemaking presents another opportunity for schools and others to provide feedback on burden. Six experts and participants in six discussion groups thought aspects of negotiated rulemaking are beneficial overall. However, some experts and discussion group participants said certain aspects of the process may limit the impact of feedback on burden. Specifically, four experts and participants in six of our discussion groups expressed concern that when the negotiated rulemaking process does not achieve consensus, the department may draft regulations unencumbered by negotiators’ input, which may have addressed burden. 
According to those we spoke with, consensus may not be achieved, for example, if Education includes controversial topics over which there is likely to be disagreement or declines to agree with other negotiators. Education officials responded that their goal during negotiated rulemakings is to draft the best language for the regulation. Further, department officials said that negotiators can collectively agree to make changes to the agenda, that unanimous consensus provides negotiators with an incentive to work together, and that the department cannot avoid negotiated rulemaking on controversial topics. Education officials said that even when consensus is not achieved, the department rarely deviates from any language agreed upon by negotiators. Notwithstanding the benefits of Title IV requirements, school officials believe that the burden created by federal requirements diverts time and resources from their primary mission of educating students. Our findings—as well as those of previous studies—indicate that the burden reported by school officials and experts stems not only from a single or a few requirements, but also from the accumulation of many requirements. While Education has solicited feedback on the burdens associated with federal requirements, our findings show that stakeholders do not always provide this feedback. As a result, stakeholders may be missing an opportunity to help reduce the burden of federal requirements on schools. We provided a draft of this report to Education for comment. Education’s written comments are reproduced in Appendix II. Education sought a clearer distinction in the report between statutory and regulatory requirements, as well as Education’s authority to address statutory requirements. We have added information accordingly. Education also recommended that the report distinguish between reporting and disclosure requirements, and we have provided definitions in the background section in response. 
Education expressed concern that the report did not sufficiently consider the benefits of federal requirements. We agree that federal requirements generally have a purpose and associated benefits—such as benefits associated with program oversight and consumer awareness—which we acknowledge in our report. Analyzing the costs and benefits associated with individual requirements was beyond the scope of this report, as our primary objective was to obtain stakeholder views on burdens. Education also suggested we report more on its efforts to balance burden and benefits when designing information collections. We acknowledged these efforts in our report and incorporated additional information that Education subsequently provided. Education also provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in Appendix V. To identify which, if any, federal requirements experts say create burden for postsecondary schools, we interviewed a range of experts. 
We chose these experts based on factors such as familiarity or experience with Title IV requirements, recognition in the professional community, relevance of their published work to our topic, and recommendations from others. We conducted interviews with representatives of nine higher education associations that represent public, private nonprofit, and private for-profit schools, including associations representing research universities, community colleges, and minority-serving institutions. We also conducted interviews with nine other postsecondary experts, including researchers and officials from individual schools with knowledge of Title IV requirements. Because our review focused on the burden and benefits experts say requirements create, we did not evaluate consumers’ perspectives on the information schools provide. To determine the types of burdens and benefits that schools say federal requirements create, we conducted eight discussion groups at two national conferences with a nongeneralizable sample of officials from 51 schools. Discussions were guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences. To optimize time during each session, we focused part of the discussion on the perceived benefits and burdens associated with one of the two sets of requirements most often cited as burdensome during the interviews we conducted with experts: consumer disclosures and Return of Title IV Funds. Specifically, four groups focused primarily on the burdens and benefits associated with consumer disclosures, and four groups focused primarily on Return of Title IV Funds. In addition, each group was provided the opportunity to discuss other requirements that officials found to be burdensome, as well as how, if at all, officials communicate feedback on burden to Education. 
Discussion groups are not an appropriate means of gathering generalizable information about school officials’ awareness of feedback opportunities because participants were self-selected and may be more aware of federal requirements and feedback opportunities than others in the population. Methodologically, group discussions are not designed to (1) demonstrate the extent of a problem or generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the discussion group participants’ attitudes on specific topics and to offer insights into their concerns about and support for an issue. In addition, the discussion groups may be limited because participants represented only those schools that had representatives at the specific conferences we attended and because participants were self-selected volunteers. To determine how Education solicits feedback from stakeholders on burden, we conducted interviews with Education officials and reviewed documentation, such as agency web pages and listserv postings used by Education to inform schools and other interested parties about negotiated rulemaking and information collections. We also solicited the views of experts during interviews and asked school officials in discussion groups how, if at all, they communicate feedback on burden to Education. Because participants were self-selected, they may be more likely than the general population to be aware of federal requirements and feedback opportunities. We reviewed Education’s ICRs related to postsecondary education submitted to OMB from August 1, 2006, to October 31, 2012, to determine how many received public comments. 
We also reviewed the ICRs that received comments to determine how many received comments related to burden. To do so, we used OMB’s reginfo.gov website, and took steps to verify the reliability of the database. We interviewed agency officials, tested the reliability of a data field, and reviewed documentation. We found the database to be reliable for our purposes. In our review of ICRs, we included new information collections along with revisions, reinstatements, and extensions of existing information collections without changes. We excluded ICRs that agencies are not required to obtain public comment on, such as those seeking approval of nonsubstantive changes. We also excluded ICRs for which the associated documents did not allow us to interpret the comments. To determine how many ICRs received comments that discussed burden faced by schools, one analyst reviewed comments for each ICR and classified them as being related or not related to the burden faced by schools. Another analyst verified these categorizations and counts. We also reviewed the number and nature of comments on Education’s preliminary plan for retrospective analysis by downloading comments from regulations.gov. We verified with Education the total number of comments received. To determine whether comments discussed burdens faced by schools, one analyst reviewed each comment and classified it as being related or not related to higher education regulations and whether it referenced burden faced by schools. Another analyst verified these categorizations and counts. We did not review comments submitted to Education in response to proposed rules. Education has received thousands of comments in response to proposed regulations in recent years, and the site does not contain a search feature that would have allowed us to distinguish comments regarding burden estimates from other topics. For all objectives, we reviewed relevant federal laws and regulations. 
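The two-level tally behind our ICR figures reduces to a simple count; the sketch below reproduces the report's totals using synthetic records (the field names are hypothetical placeholders, not reginfo.gov's actual schema):

```python
from collections import Counter

def tally_icrs(icrs):
    """Count ICRs reviewed, ICRs that received public comments, and ICRs
    with at least one comment classified as addressing school burden."""
    counts = Counter()
    for icr in icrs:
        counts["total"] += 1
        if icr["comments"]:
            counts["with_comments"] += 1
            if any(c["school_burden"] for c in icr["comments"]):
                counts["school_burden"] += 1
    return counts

# Synthetic data matching the report's figures: 353 ICRs reviewed,
# 65 with public comments, of which 25 addressed burden faced by schools.
sample = (
    [{"comments": [{"school_burden": True}]}] * 25
    + [{"comments": [{"school_burden": False}]}] * 40
    + [{"comments": []}] * 288
)
counts = tally_icrs(sample)
print(counts["with_comments"], counts["school_burden"])  # 65 25
assert counts["with_comments"] / counts["total"] < 0.25  # fewer than one-fourth
```

In our actual review, the "school_burden" classification was made by one analyst reading each comment and verified by a second, rather than derived from any data field.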
We conducted this performance audit from April 2012 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The table below lists some of the specific concerns expressed by school officials we spoke to in discussion groups in response to questions about burdensome federal requirements. GAO identified statutory or regulatory provisions that relate to the burdens described by school officials and compiled these summaries to better illustrate the underlying requirements about which we received comments. These are only examples, not a list of every requirement specifically reported to us as burdensome. The summaries provided below are not intended to be complete descriptions of each requirement, and additional statutory or regulatory provisions related to these comments may also apply. In some cases a provision may have multiple sources, such as where statutory requirements are further interpreted in a regulation or guidance document. Discussion Group Participant Concern: Consumer Disclosures. This category encompasses a number of different federal requirements to collect information on various topics and make that information available to specified groups or entities. Students, prospective students, and others can use this information to be better informed. The information can help people make decisions such as whether or not to attend or seek employment at a school. 
Summary of Related Federal Provisions: The statute and regulations require eligible institutions to collect certain information on campus crime statistics and security policies and to prepare, publish, and distribute an annual security report to all current students and employees (and to any prospective student or employee upon request). The report must contain, among other information, statistics on certain crimes reported to campus security authorities or local police agencies. 20 U.S.C. § 1092(f)(1)(F); 34 C.F.R. §§ 668.41(e), 668.46. The regulations require that an institution “make a reasonable, good faith effort to obtain the required statistics” and may rely on information supplied by a local or state police agency. “If the institution makes such a reasonable, good faith effort, it is not responsible for the failure of the local or State police agency to supply the required statistics.” 34 C.F.R. § 668.46(c)(9). Discussion Group Participant Concern: Placement rates. Placement rate calculations differ across schools, and even within schools, and are confusing to students, requiring school staff to provide additional explanation of some data. Summary of Related Federal Provisions: The statute requires that institutions produce and make readily available upon request—through appropriate publications, mailings, and electronic media—to an enrolled student and to any prospective student the placement in employment of, and types of employment obtained by, graduates of the institution’s degree or certificate programs, gathered from such sources as alumni surveys, student satisfaction surveys, the National Survey of Student Engagement, the Community College Survey of Student Engagement, State data systems, or other relevant sources. 20 U.S.C. § 1092(a)(1)(R). 
According to the regulations, information concerning the placement of, and types of employment obtained by, graduates of the institution’s degree or certificate programs may be gathered from: (1) the institution’s placement rate for any program, if it calculates such a rate; (2) state data systems; (3) alumni or student satisfaction surveys; or (4) other relevant sources. The institution must identify the source of the information provided, as well as any time frames and methodology associated with it. In addition, the institution must disclose any placement rates it calculates. 34 C.F.R. § 668.41(d)(5). Return of Title IV Funds: In general, if a recipient of Title IV grant or loan assistance withdraws from an institution, the statute and regulations establish a procedure for calculating and returning unearned funds. Returning these funds can protect the interests of the federal government and the borrower. The statute provides that, for institutions required to take attendance, the day of withdrawal is determined by the institution from such attendance records. 20 U.S.C. § 1091b(c)(1)(B). The regulations prescribe in further detail which institutions are required to take attendance and how to determine the withdrawal date: For a student who ceases attendance at an institution that is required to take attendance, including a student who does not return from an approved leave of absence, or a student who takes a leave of absence that does not meet the regulatory requirements, the student’s withdrawal date is the last date of academic attendance as determined by the institution from its attendance records. 34 C.F.R. § 668.22(b). “Institutions that are required to take attendance are expected to have a procedure in place for routinely monitoring attendance records to determine in a timely manner when a student withdraws. 
Except in unusual instances, the date of the institution’s determination that the student withdrew should be no later than 14 days (less if the school has a policy requiring determination in fewer than 14 days) after the student’s last date of attendance as determined by the institution from its attendance records.” Federal Student Aid Handbook, June 2012, and Education “Dear Colleague Letters” GEN-04-03 Revised, Nov. 2004, and DCL GEN-11-14, July 20, 2011. Summary of Related Federal Provisions: An institution is required to return any unearned Title IV funds it is responsible for returning within 45 days of the date the school determined the student withdrew. 20 U.S.C. § 1091b(b)(1); 34 C.F.R. §§ 668.22(j)(1), 668.173(b). For a student who withdraws without providing notification from a school that is not required to take attendance, the school must determine the withdrawal date no later than 30 days after the end of the earlier of (1) the payment period or the period of enrollment (as applicable), (2) the academic year, or (3) the student’s educational program. 34 C.F.R. § 668.22(j)(2). “If a student who began attendance and has not officially withdrawn fails to earn a passing grade in at least one course over an entire period, the institution must assume, for Title IV purposes, that the student has unofficially withdrawn, unless the institution can document that the student completed the period. “In some cases, a school may use its policy for awarding or reporting final grades to determine whether a student who failed to earn a passing grade in any of his or her classes completed the period. For example, a school might have an official grading policy that provides instructors with the ability to differentiate between those students who complete the course but failed to achieve the course objectives and those students who did not complete the course. 
If so, the institution may use its academic policy for awarding final grades to determine that a student who did not receive at least one passing grade nevertheless completed the period. Another school might require instructors to report, for all students awarded a non-passing grade, the student’s last day of attendance (LDA). The school may use this information to determine whether a student who received all “F” grades withdrew. If one instructor reports that the student attended through the end of the period, then the student is not a withdrawal. In the absence of evidence of a last day of attendance at an academically related activity, a school must consider a student who failed to earn a passing grade in all classes to be an unofficial withdrawal.” Federal Student Aid Handbook, June 2012, and Education “Dear Colleague Letter” GEN-04-03 Revised, Nov. 2004. All references to “statute” or “regulations” are references to the Higher Education Act of 1965 (HEA), as amended, and Education’s implementing regulations. All references to “eligible institutions” refer to eligible institutions participating in Title IV programs, as defined by the HEA, as amended. Postsecondary schools may be subject to numerous federal requirements in addition to those related to Title IV of the Higher Education Act of 1965, as amended, which may be established by various other statutes or regulations promulgated by different agencies. The specific requirements to which an individual school is subject may depend on a variety of factors, such as whether it conducts certain kinds of research or is tax-exempt (see the following examples). This is not intended to be a comprehensive list; rather, the examples were selected to represent the variety of types of requirements to which schools may be subject.
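The withdrawal-timing rules for Return of Title IV Funds summarized above reduce to simple calendar arithmetic. The following is a minimal sketch, assuming plain calendar days and ignoring regulatory edge cases such as approved leaves of absence or period-of-enrollment calculations; the function names are invented for illustration.

```python
from datetime import date, timedelta

def determination_deadline(last_attendance: date, school_policy_days: int = 14) -> date:
    """Latest expected date for an attendance-taking school to determine
    that a student withdrew: 14 days after the last date of attendance,
    or fewer if the school's own policy requires a shorter window."""
    return last_attendance + timedelta(days=min(school_policy_days, 14))

def return_deadline(determination: date) -> date:
    """Latest date for the school to return unearned Title IV funds:
    45 days after the date it determined the student withdrew."""
    return determination + timedelta(days=45)

# Hypothetical example: last recorded attendance on October 1, 2012.
d = determination_deadline(date(2012, 10, 1))
r = return_deadline(d)
```

Here the determination should occur by October 15, 2012, and any unearned funds would be due back by November 29, 2012.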
Nuclear Research: Schools licensed to conduct medical research using nuclear byproduct material must follow Nuclear Regulatory Commission requirements on safety and security, or compatible requirements issued by a state that has entered into an agreement with the Nuclear Regulatory Commission. Schools that house nuclear reactors for research purposes are also subject to additional regulations, including those on emergency management. Research Misconduct: To receive federal funding under the Public Health Service Act for biomedical or behavioral research, institutions (including colleges and universities) must have written policies and procedures for addressing research misconduct and must submit an annual compliance report to the federal government. The Public Health Service has issued regulations detailing institutions’ responsibilities in complying with these requirements. Research on animals: Applicants for funding for biomedical or behavioral research under the Public Health Service Act must provide an assurance to the National Institutes of Health that the research entity complies with the Animal Welfare Act and the Public Health Service Policy on Humane Care and Use of Laboratory Animals, and that it has appointed an appropriate oversight committee (an Institutional Animal Care and Use Committee). The oversight committee must review the care and treatment of animals in all animal study areas and facilities of the research entity at least semi-annually to ensure compliance with the Policy. Employment Discrimination: Title VII of the Civil Rights Act of 1964, as amended, prohibits employment practices that discriminate based on race, color, religion, sex and national origin. These requirements apply to schools that qualify as employers as defined by Title VII, generally including private and state or local employers that employ 15 or more employees. Disabilities: The Americans with Disabilities Act of 1990 (42 U.S.C. §§ 12101–12213) prohibits discrimination against individuals with disabilities in several areas, including employment, state and local government activities, and public accommodations. Different agencies administer different aspects of the Americans with Disabilities Act, including the Equal Employment Opportunity Commission and the Department of Justice. In addition, section 504 of the Rehabilitation Act of 1973, as amended, prohibits discrimination on the basis of disability under any program or activity that receives federal financial assistance. Colleges, universities, other postsecondary institutions, and public institutions of higher education are subject to these requirements. Sex Discrimination: Title IX of the Education Amendments of 1972 prohibits discrimination on the basis of sex in any federally funded education program or activity. Title IX applies, with a few specific exceptions, to all aspects of education programs or activities that receive federal financial assistance, including athletics. Byrd Amendment: Educational institutions that receive federal funds must hold an annual educational program on the U.S. Constitution. Internal Revenue Service Form 990: Schools that have tax-exempt status generally must annually file IRS Form 990. The form requires a range of information on the organization’s exempt and other activities, finances, governance, compliance with certain federal tax requirements, and compensation paid to certain persons. In addition to the contact named above, Bryon Gordon (Assistant Director), Debra Prescott (Assistant Director), Anna Bonelli, Joy Myers, and Daren Sweeney made key contributions to this report. Additionally, Deborah Bland, Kate Blumenreich, Tim Bober, Sarah Cornetto, Holly Dye, Kathleen van Gelder, and Elizabeth Wood aided in this assignment.
| Postsecondary schools must comply with a variety of federal requirements to participate in student financial aid programs authorized under Title IV. While these requirements offer potential benefits to schools, students, and taxpayers, questions have been raised as to whether they may also distract schools from their primary mission of educating students. GAO examined (1) which requirements, if any, experts say create burden, (2) the types of burdens and benefits schools say requirements create, and (3) how Education solicits feedback from stakeholders on regulatory burden. GAO reviewed relevant federal regulatory and statutory requirements, and past and ongoing efforts examining postsecondary regulatory burden; interviewed Education officials and 18 experts, including officials from associations that represent postsecondary schools; and conducted eight discussion groups at two national conferences with a nongeneralizable sample of 51 school officials from public, nonprofit, and for-profit sectors. GAO also reviewed documentation associated with Education's requests for public comment on burden for proposed postsecondary information collections and its retrospective analysis of regulations. Experts GAO interviewed offered varied opinions on which student financial aid requirements under Title IV of the Higher Education Act of 1965, as amended, are the most burdensome. While no single requirement was cited as burdensome by a majority of the 18 experts, 11 cited various consumer disclosure requirements--such as those pertaining to campus safety--primarily due to the time and difficulty needed to gather the information. Beyond consumer disclosures, 4 experts cited "Return of Title IV Funds"--which requires schools to calculate and return unearned financial aid to the federal government when a recipient withdraws from school--as burdensome because schools find it difficult to calculate the precise amount of funds that should be returned. 
More broadly, 6 experts said that the cumulative burden of multiple requirements is a substantial challenge. Experts also noted some benefits. For example, an expert said required loan disclosures help students understand their repayment responsibilities. School officials who participated in each of the eight discussion groups GAO conducted expressed similar views about the types of burdens and benefits associated with Title IV requirements. Participants in all groups said requirements for consumer disclosures and Return of Title IV Funds are costly and complicated. Regarding consumer disclosures, participants questioned the value of disclosing data that cannot be readily compared across schools, like data on graduates' employment, which may be calculated using different methodologies. Participants in four groups found Return of Title IV Funds requirements difficult to complete within the required time frame. Participants also cited some benefits, such as how consumer disclosures can help applicants choose the right school and unearned Title IV funds can be redirected to other students. Education seeks feedback from schools on regulatory burden mainly through formal channels, such as announcements posted in the Federal Register, on its website, and on a department listserv. However, Education officials said they have received a limited number of comments about burden in response to these announcements. GAO reviewed Education's notices soliciting public comments on burden estimates for its postsecondary information collections--which require the public, including schools, to submit or publish specified data--and found that 65 of 353 notices (18 percent) received comments, of which 25 received comments related to burden. For example, 2 notices received input on the difficulties of providing data requested by the department. GAO makes no recommendations in this report. 
In its comments, Education sought clarification regarding types of federal requirements and additional information on its efforts to balance burden and benefits. We provided clarifications and additional information, as appropriate. |
The Personal Responsibility and Work Opportunity Reconciliation Act (P.L. 104-193), enacted in August 1996, overhauled the nation’s welfare system. Although some states were already implementing changes to their welfare programs before this legislation, the act abolished the federal Aid to Families With Dependent Children program and established TANF block grants, which imposed stronger work requirements for welfare recipients than its predecessor program. TANF provides benefits for a time-limited period and focuses on quickly putting individuals to work. The TANF block grants available to states totaled about $16.6 billion in fiscal year 1998—ranging from about $21.8 million in Wyoming to over $3.7 billion in California. To receive their TANF grants, states must maintain funding for needy families at specified levels tied to their historical expenditures on welfare programs. The Balanced Budget Act of 1997 authorized $3 billion for welfare-to-work grants to states (the 50 states, the District of Columbia, Guam, Puerto Rico, and the Virgin Islands) and local communities to move welfare recipients into jobs—$1.5 billion is available to be awarded by Labor each year in fiscal years 1998 and 1999. A small amount of the total grant money was set aside for special purposes: 1 percent for Native American tribes ($15 million for each year), 0.8 percent for evaluation ($12 million for each year), and $100 million in fiscal year 1999 for performance bonuses to states that successfully move welfare recipients into employment. After these set-asides, Labor allocated 75 percent (about $1.1 billion for fiscal year 1998) of the welfare-to-work funds to states on the basis of a formula that equally considers the shares of individuals with incomes below the poverty level and adult recipients of TANF assistance residing in the state.
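The equal-weight state allocation formula described above can be sketched in a few lines. This is a minimal illustration, not Labor’s actual computation: the three-state figures are invented, and the function name is hypothetical.

```python
# Each state's share is the average of (a) its share of the nation's
# individuals in poverty and (b) its share of adult TANF recipients --
# the two factors the statute weights equally.

def state_allocations(pool, poverty, tanf_adults):
    """Allocate `pool` dollars across states by the equal-weight formula."""
    total_pov = sum(poverty.values())
    total_tanf = sum(tanf_adults.values())
    shares = {
        s: 0.5 * poverty[s] / total_pov + 0.5 * tanf_adults[s] / total_tanf
        for s in poverty
    }
    return {s: pool * share for s, share in shares.items()}

# Hypothetical three-state example (invented figures, not actual 1998 data):
alloc = state_allocations(
    pool=1_100_000_000,
    poverty={"A": 2_000_000, "B": 1_000_000, "C": 1_000_000},
    tanf_adults={"A": 300_000, "B": 100_000, "C": 100_000},
)
```

State A holds half the poverty population and 60 percent of adult TANF recipients, so its share is 0.55 and it draws $605 million of the $1.1 billion pool.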
States must pledge one dollar of nonfederal funding to match every two dollars of federal funding provided under the formula; up to half of the match may consist of third-party in-kind contributions. The state welfare-to-work matching funds are in addition to the state funds that must be expended as required under TANF block grants. Funds not allocated by formula, which are nearly 25 percent of the welfare-to-work funds (over $368 million for fiscal year 1998), were available for Labor to award competitively to local organizations. These organizations—local governments, Private Industry Councils, and private organizations that apply in conjunction with a Private Industry Council or local government—submit applications to Labor describing how they plan to use welfare-to-work funds. In addition to giving special consideration to cities with large concentrations of poverty and to rural areas, Labor reviews applications and awards competitive grants using the following criteria: the relative need for assistance in the area proposed to be served; the extent to which the project proposes innovative strategies for moving welfare recipients into lasting work; the quality of the proposed outcomes of the project; the degree to which the project is coordinated with other services; and the demonstrated ability of the grant applicant. To receive its allocation of welfare-to-work formula funds, a state was required to submit a plan for the use and administration of the grant funds to Labor. The Secretary of Labor then determined whether the plan met the statutory requirements, including assurances that the plan was developed with coordination from appropriate entities in substate areas and that welfare-to-work programs and funds would be coordinated with programs funded through the TANF block grants. Using an allocation formula developed by the state, 85 percent of the state’s federal formula funds were to be passed to local Private Industry Councils. 
The Private Industry Councils have policy-making responsibility in these service delivery areas and administer the welfare-to-work programs at the local level unless the Secretary of Labor approves a governor’s request to use an alternative administering agency. The remaining 15 percent of the state’s formula allotment may be spent on welfare-to-work projects of the state’s choice, which is described in this report as the governor’s discretionary fund. States establish their own formula for allocating formula funds to Private Industry Councils for local service delivery areas but must give a minimum weight of 50 percent to the number of people in the area in excess of 7.5 percent of the population whose income is below the poverty level. States may also consider the local area’s proportion of the state’s long-term welfare population or the state’s unemployed population. Additionally, if the amount to be allocated by formula to a local service delivery area is less than $100,000, that money may be held by the state and added to the 15 percent governor’s discretionary funds. Labor was required to obligate the fiscal year 1998 formula grant funds by September 30, 1998; however, funds for the competitive grants were multiyear, and Labor could obligate those funds into fiscal year 1999. Both formula and competitive grants must be spent within 3 years of the grant award. Under the Balanced Budget Act, the welfare-to-work grants were initially legislated as multiyear allocations that could be awarded any time in fiscal years 1998 and 1999, but this law was amended to require that Labor award formula funds available for fiscal year 1998 by September 30, 1998. If at the end of any fiscal year states have not applied for or have applied for less than the maximum amount available for formula funds, the funds are to be transferred to the General Fund of the U.S. Treasury. 
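The substate pass-through rules above also lend themselves to a short sketch. This assumes a state that weights only the excess-poverty factor (the statute requires that factor be given at least 50 percent weight, and states may add others, such as long-term welfare or unemployment shares) and that rolls any area allocation under $100,000 into the governor’s discretionary funds; the function name and all area figures are invented.

```python
def substate_allocations(pass_through, areas):
    """areas maps name -> (population, persons_in_poverty).
    Returns (allocations to areas, amount rolled into discretionary funds)."""
    # Excess poverty: poor persons above 7.5 percent of the area's population.
    excess = {
        name: max(0.0, poor - 0.075 * pop) for name, (pop, poor) in areas.items()
    }
    total = sum(excess.values())
    allocs = {name: pass_through * e / total for name, e in excess.items()}
    # Allocations under $100,000 may be held by the state and added to the
    # governor's discretionary funds instead of going to the area.
    rolled_back = sum(a for a in allocs.values() if a < 100_000)
    allocs = {n: a for n, a in allocs.items() if a >= 100_000}
    return allocs, rolled_back

# Hypothetical two-area example (invented figures):
allocs, rolled_back = substate_allocations(
    pass_through=1_010_000,
    areas={"X": (1_000_000, 175_000), "Y": (200_000, 16_000)},
)
```

Area Y’s formula amount comes to about $10,000 — under the $100,000 floor — so it is held by the state rather than passed to the local Private Industry Council.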
Competitive grant funds, however, remain multiyear funds, and there is no requirement to obligate funds for fiscal year 1998 within the fiscal year. Grantees have flexibility in designing welfare-to-work strategies geared to the needs of their own local populations and labor markets. Overall, welfare-to-work program services help individuals get and keep unsubsidized employment. Allowable activities include job readiness and placement services financed through vouchers or contracts; community service or work experience; job creation through public sector or private sector employment wage subsidies; and on-the-job training, postemployment services financed through vouchers or contracts, and job retention and support services. Both formula and competitive grant funds are to be used for certain TANF families—recipients on long-term welfare assistance, TANF recipients with characteristics of long-term welfare dependence, and/or their noncustodial counterparts. These people are considered hard to employ and may have low educational attainment or poor work histories. The law requires that at least 70 percent of the funds be spent on the hardest to serve long-term welfare recipients with two of three specified barriers to successful employment. Up to 30 percent of the grant funds may be spent on individuals with characteristics of long-term welfare recipients; these characteristics could include dropping out of school, teenage pregnancy, or poor work history. Under either the 70- or 30-percent category, noncustodial parents with dependents receiving TANF assistance may qualify for welfare-to-work activities. (See table 1 for a summary of eligibility requirements for welfare-to-work services.) Labor awarded about $1 billion in formula grants for fiscal year 1998 to all but six states. The six states that chose not to participate in the formula grant program would have received about $71 million. 
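The 70/30 eligibility rule above is, at bottom, a spending-split test. A minimal sketch with two simplified spending buckets — dollars spent on the hardest-to-serve long-term recipients versus dollars spent on individuals with characteristics of long-term dependence; the function name is invented.

```python
def spending_split_ok(hardest_to_serve_dollars, other_eligible_dollars):
    """True if at least 70 percent of total grant spending went to the
    hardest-to-serve category, leaving at most 30 percent for the other."""
    total = hardest_to_serve_dollars + other_eligible_dollars
    return total > 0 and hardest_to_serve_dollars >= 0.70 * total
```

For example, a grantee spending $700,000 and $300,000 in the two categories sits exactly at the statutory floor, while $600,000 and $400,000 would not.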
Labor also awarded a total of almost $500 million in competitive grants using all of the approximately $368 million in competitive grant funds available for fiscal year 1998 and about a third of the competitive grant funds available for fiscal year 1999. Most states applied for and received their full allocation of formula grant funds. (See app. II for the amount of formula funds awarded, by state, for fiscal year 1998.) Of the states that applied for formula grant funding, Arizona was the only state that did not pledge sufficient matching funds to receive its maximum federal allocation. Of the states that declined to participate in the welfare-to-work program, four states did not submit a welfare-to-work plan to Labor and the remaining states informed Labor that they would not participate in the formula grant program. These six states chose not to participate for various reasons, including concerns about their ability to provide state matching funds. Arizona needed about $9 million in matching funds to obtain its full allocation of about $17 million in federal welfare-to-work funds; however, the state legislature was not willing to provide this amount in matching funds. Instead, the state assured a match of $4.5 million and obtained a formula grant for $9 million in fiscal year 1998. Initially, Arizona asked the local service delivery areas to determine whether they could raise the required matching funds; however, the local areas, while they wanted the welfare-to-work funding, did not believe they could raise the matching funds locally. Of the six states that declined to participate in the welfare-to-work formula program, four states—Idaho, Mississippi, South Dakota, and Wyoming—neither informed Labor they would not be participating in welfare-to-work, nor submitted a welfare-to-work plan to Labor; the remaining states—Ohio and Utah—informed Labor that they would not participate. 
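Arizona’s experience illustrates the two-for-one match arithmetic described earlier: each nonfederal dollar a state pledges draws two federal formula dollars, up to the state’s maximum allocation. A minimal sketch using the rounded figures from the text; the function name is invented, and the cap at the maximum allocation is an inference from the description above.

```python
def federal_draw(pledged_match, max_allocation):
    """Federal formula funds a state can draw for a given pledged
    nonfederal match, capped at its maximum allocation."""
    return min(2 * pledged_match, max_allocation)
```

With $4.5 million assured, Arizona drew a $9 million grant; a pledge of about $9 million would have captured its full allocation of about $17 million.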
Ohio initially applied for its welfare-to-work allocation, but the governor later decided the grant was too complex and burdensome, especially the match requirement. Since Ohio had excess, unobligated TANF funds, state officials believed the TANF funds should be used to move welfare recipients to work—especially because there were no matching requirements and the eligibility requirements were less restrictive. Utah sent a letter declining its allocation, listing two reasons for its decision—that the state believed the formula funding was too restrictive regarding participant eligibility and that it believed the welfare-to-work grants were too prescriptive and did not allow the state enough flexibility. A state official in Utah also said that, at the time the letter was sent, officials believed TANF funds were sufficient to serve the TANF population’s needs; furthermore, the funds required a state match, which did not seem feasible at the time. The states that did not apply for welfare-to-work funds had various reasons for not participating. For example, an official in Idaho noted that the state’s TANF caseload had dropped precipitously; consequently, the state had adequate TANF funds to meet the employment and training needs of the remaining welfare recipients. The official also estimated that no more than about 350 of the state’s welfare recipients were eligible for welfare-to-work services—and perhaps as few as 100. A state official in Mississippi said that a significant amount of TANF funds had been budgeted for job skills development and job search. Additionally, the state had set aside 30 percent of enrollments in JTPA for welfare recipients and was having difficulty filling these slots. Consequently, in addition to concerns about the state’s ability to provide matching funds, the state decided against applying for welfare-to-work funds. The six states may still apply for fiscal year 1999 funds and have until March 1999 to do so.
As of November 20, 1998, Labor had awarded a total of 126 competitive grants. On May 27, 1998, Labor announced the first round of competitive grants, which resulted in awards of about $200 million—approximately half of the fiscal year 1998 welfare-to-work competitive grant funds—to 51 local organizations. On November 20, 1998, Labor awarded the second round of competitive grants to 75 local organizations; these grants totaled about $273 million and represented combined competitive grant funds from the remainder of fiscal year 1998 funds and a portion of the fiscal year 1999 funds. (See apps. III and IV for a list of the first and second rounds of competitive grants awarded, by state.) Most states had at least one local service organization that received competitive grant funds. (See table 2 for the distribution of welfare-to-work competitive grants awarded by Labor.) Three states that we reviewed targeted a specific population for formula grant funds, while the other three states defined their welfare-to-work focus more broadly and did not emphasize a specific service strategy or targeted population. In the six states, local communities targeted populations and designed their welfare-to-work activities consistent with their state’s plan. Competitive grants focused more narrowly on a specific population and activity. Three of the six states we reviewed—Massachusetts, Michigan, and Wisconsin—specified populations to be served with formula grant funds, such as assistance to unemployed noncustodial parents or TANF recipients who are reaching their time limits on cash assistance. Plans for the other three states—Arizona, California, and New York—stated that the use of welfare-to-work funds would be determined by the local service delivery areas. (See apps. V through X for a brief description of the formula grant plans in each of the six states.) In the six states, the local plans we reviewed proposed a range of welfare-to-work activities for eligible participants. 
Three states planned a specific statewide focus for formula grant funds. For example, Michigan’s plan emphasized serving unemployed noncustodial parents who have child support payments in arrears and whose dependents are receiving TANF assistance. The goal was to increase payments by these noncustodial parents for child support. For these noncustodial parents, not participating in the welfare-to-work program has serious consequences—incarceration—unless there is good cause for nonparticipation. Michigan required local service delivery areas to devote 50 percent of their welfare-to-work grant funds to assist noncustodial parents. Wisconsin’s plan also emphasized serving noncustodial parents, and because its TANF caseload is low, the state also proposed to assist individuals receiving only TANF child care subsidies. Massachusetts planned on serving TANF recipients who are reaching their 24-month limit for receiving cash assistance—about 7,000 were expected to lose cash assistance benefits on December 1, 1998. In contrast, three states defined their formula grant focus more broadly and did not emphasize a specific service strategy. California’s state plan noted that—given the diversity of the state’s local service delivery areas—no one service strategy could be effectively applied statewide. Arizona’s plan outlined the state’s support to local service delivery areas in their efforts to target welfare-to-work services to hard-to-serve TANF recipients, noncustodial parents, and other eligible individuals. New York’s plan provided a general welfare-to-work focus on improving the connection to work, although the state plan placed some emphasis on serving individuals with disabilities; many of these individuals have experienced long-term welfare dependency and had been exempt from work requirements under Aid to Families With Dependent Children but are no longer exempt under the state’s TANF program. The local plans we reviewed proposed a range of activities for their formula grant allocations.
Because welfare-to-work programs are administered locally, state officials in the six states we reviewed said local entities have the ability to design welfare-to-work activities and target populations within the parameters of the state plan. For example, the New York state plan did not define, beyond the federal welfare-to-work eligibility requirements, the population to be served with formula funds, and state officials said that different local plans emphasized different activities, such as mentoring, case management, training to upgrade employment, literacy, and career ladder development. The officials also noted that local service delivery areas considered the services funded by TANF and proposed to focus formula grant funds on areas where services were lacking. In states with a focus on serving a targeted population with formula grant funds, local service delivery areas focused on these objectives in their welfare-to-work plans. For example, in Massachusetts, local service delivery areas, following the state’s direction, will provide services to TANF recipients facing time limits on cash assistance. Likewise, a service delivery area in Michigan will identify its welfare-to-work participants through the Family Independence Agency, which is the TANF agency, and the Friend of the Court, which refers noncustodial parents. However, focusing on the needs of its own local population, this service delivery area also plans to serve several other populations whose characteristics are associated with or predictive of long-term welfare dependency, such as rural isolation, substance abuse, homelessness, being a single parent, or being an offender. For states leaving more discretion to local service delivery areas in planning their strategies for the use of formula grant funds, some local areas designed their welfare-to-work activities to complement existing employment delivery systems. 
For example, in the San Diego, California, service delivery area, about 3,000 long-term welfare recipients will receive a package of services, for about 18 to 24 months, designed to meet their needs, which will include at least 16 hours a week of work activities and up to 16 hours a week of support services. These services are provided by competitively procured contractors, and each contract includes an incentive program to move participants into work expeditiously. Some local plans emphasized new approaches for moving welfare recipients to work. For example, local officials in Phoenix, Arizona, plan to use formula grant funds to develop new relationships with large businesses that will receive consulting services in exchange for hiring welfare recipients; the welfare-to-work participants will receive job readiness training as well as mentoring and job coaching after they are hired to improve their chances of job retention. Local officials we interviewed said service delivery areas planned to use formula grant funds particularly to provide postemployment services. For example, in New York’s Oneida-Herkimer-Madison service delivery area, the welfare-to-work program is based on using employment retention specialists who will provide 24-hour support service to participants. A third of the area’s formula funds will be spent on the 6-person employment retention staff; smaller amounts of the formula funds were allocated for services such as transportation and child care because the program hopes to use existing programs and resources for these services. Even with its focus on job retention services, the local service delivery area will maintain a menu of services so that it can provide all services to clients as needed. The welfare-to-work program for a local area in Massachusetts represents another example of providing postemployment services with formula grant funds. 
This welfare-to-work program planned to provide support after job placement for up to 6 months rather than the 30 to 60 days that other employment and training programs generally provide participants. At the time of our review, this program had placed about 10 of the 70 current participants in jobs, and these employed participants were receiving services such as mentoring and case management. A program official noted that, until a participant finds a job, the local career center provides most services; however, once the participant finds a job, the career center’s role diminishes, and participants primarily are served through the welfare-to-work program since it can provide postemployment services. The local area is still developing community resource teams to help TANF recipients manage their lives. The official explained that, once placed in jobs, welfare-to-work participants might fail to report to work if they are sick or if they cannot obtain child care. Ideally, the community resource teams would help individuals find resources to assist them with these situations without losing their jobs. The proposed use of the governor’s discretionary portion of state formula funds (up to 15 percent of the formula funds) generally followed the states’ welfare-to-work initiatives. States that targeted populations for welfare-to-work activities used discretionary funds for those individuals. For example, Michigan distributed its discretionary funds (about $6 million) to the local areas in order to provide more funding to serve noncustodial parents. In Wisconsin, the discretionary funds (about $2 million) will be used for a variety of purposes; however, the largest portion of the discretionary funds (about $1.1 million) will be allocated to the state’s Department of Corrections to provide employment assistance to noncustodial parents in correctional institutions, on parole, or on probation. 
In Massachusetts, which emphasized assistance to TANF recipients facing time limits on cash assistance, the state planned to allocate over half of its about $3 million in discretionary funding to the Department of Transitional Assistance to supplement its program of assessment and structured employment assistance. Massachusetts also planned to subsidize five local areas that were allocated the lowest amount of formula funds. The state used these funds to provide a minimum of $400,000 to each area because state officials believed that local areas needed this level of funding to have an effective welfare-to-work program. For states that had a broader focus for their formula funds, plans for the governor’s discretionary funds were analogous with those for local areas given wider discretion for the use of these funds. In California, the state distributed the governor’s discretionary funds (about $29 million) primarily through a competitive process—special consideration was given to a broad array of programs that addressed needs in rural areas; leveraged other resources; and demonstrated an innovative, coordinated approach to services. Of New York’s discretionary funds (over $14 million), the state planned to use about 70 percent to support varied services—also on a competitive basis—to move individuals into employment and provide postemployment services to help working participants continue to work and increase earnings. Finally, Arizona combined its discretionary funds (about $1.4 million) with allocations made to the local service delivery areas but did not emphasize service to a specific population as did Michigan. The plans for competitive grants we reviewed in the six states focused on specific populations and activities. The competitive grantees proposed a variety of different activities and targeted different populations under this program for innovative approaches. 
Some of the welfare-to-work competitive grants will be used to complement programs funded by local formula allocations, and others will function separately from the local formula grant but rely on the same systems as the formula grantees to verify welfare-to-work eligibility. Several competitive grants will complement formula grant programs. For example, Phoenix planned to use its formula funds to assist participants in gaining employment with large businesses, while its competitive grant will be used to link participants with small businesses. The same approach will be used for both programs. Using both formula and competitive grant funds, EARN, an acronym for Employment and Respect Now, will assess and screen participants for drug use, then enroll them in a 5-week job readiness program that includes some computer-based training. For the competitive grant, these participants will be placed in employment among 900 small businesses that receive tax credits for employing them. Throughout the participant’s work experience, EARN staff and volunteers will provide mentoring, job coaching, and other services for job retention. Similarly, Detroit’s competitive grant will be used to complement its basic program of assisting all individuals in obtaining employment by providing more intense services for the hardest-to-employ population. Transportation to work sites is often a critical problem for welfare-to-work participants, and a portion of the competitive grant will be used to fund a demonstration project called Easy Ride that will purchase several alternative fuel vehicles and employ a person to coordinate transportation schedules for welfare-to-work participants. Additionally, the competitive grant in Detroit will provide more intense job readiness training such as substance abuse counseling and classes for English-as-a-Second-Language. 
The Metropolitan Area Planning Commission in Boston also planned to use its competitive grant funds to complement the area’s formula grant programs by developing a transportation program to help individuals get to work. An “Access to Jobs” study found specific gaps in transportation services that hampered individuals from obtaining employment. The study found that people either had no available public transportation, had to make multiple trips to get from their residence to their work site, or simply did not know how to make the trip. The Commission will work to connect city residents to suburban jobs, and suburban residents to jobs in other suburbs or the city. The program will assist people served by the formula grant programs and will provide (1) information about transportation modes, schedules, and day care sites near transportation; (2) direct assistance, such as subsidies for public transportation; and (3) an emergency fund for unanticipated transportation needs, allocated on a case-by-case basis. For example, if someone is not served by public transportation but has a car in need of repairs, the fund could be used to keep this individual’s car in running order. Other competitive grants will function separately from the local formula grant but rely on the same systems to verify welfare-to-work eligibility as the formula grantees—the welfare offices or the court system. For example, Oakland, California, will use its competitive grant to expand its pilot program to train and place Head Start parents in jobs. The program staff hope to identify participants who are noncustodial parents or who have substance abuse problems, but, similar to the welfare-to-work eligibility determination for the local formula grant, the staff will also submit a list of interested Head Start parents to the county welfare agency to verify TANF status. 
The Private Industry Council of Milwaukee County will provide legal assistance to long-term welfare clients and noncustodial parents whose legal problems—combined with poor academic and work skills—are barriers to employment. For its competitive grant, the Private Industry Council plans to serve 200 long-term TANF recipients (primarily women) and 450 noncustodial parents (primarily men) identified by the welfare agency or the court system—this is the same way that the Private Industry Council will determine welfare-to-work eligibility for participants served by the local formula grant. The competitive grant will be used to provide legal advocacy and case management to participants, track individuals who drop out of the program and try to reintegrate them, and develop a process that will place a randomly selected group of noncustodial parents in unsubsidized or subsidized employment. This process will require that placement firms pay for the subsidized employment, thus providing the firms with incentives for finding jobs for their clients. In New York City, the Consortium for Worker Education will use its competitive grant to train and assist women to provide child care from their homes as satellites for private sector child care centers. The Consortium planned to build on its concept of both putting welfare recipients to work by providing child care in their homes and creating needed child care slots for workers in New York City. Recruitment for the program will be managed by two vendors who will advertise, hold presentations at community centers, and obtain referrals from the city welfare department. Once recruited, participants will be assessed and interviewed. For those selected for the program, their welfare-to-work eligibility will be determined by the city’s TANF agency, which is also the administrative entity for the city’s local formula grant. Those deemed eligible must then have their homes inspected for compliance with city building and health codes. 
Once accepted, the Consortium will enroll participants in a 2-week job readiness program followed by a 16-week Work Experience Program. Participants will spend 60 percent of their work experience working in a day care center and 40 percent in classroom training. When individuals have successfully completed their work experience, they will be hired by the parent company, Satellite Child Care, Inc. The provider’s home will then be opened as a satellite child care center, and the provider will receive a $4,000 kit containing various equipment, including a computer package that has software for children and distance learning capabilities so the provider can receive continued instruction. The providers will receive on-going supervision and home visits from the parent company. State and local officials in the six states we reviewed noted that a stronger partnership was developing between the workforce development agencies and other human service agencies assisting welfare recipients. They attributed this stronger relationship, at least in part, to their joint involvement in the welfare-to-work planning process. At the state level, each of the six states we reviewed had developed a partnership steering committee, task force, or work group to develop the states’ plans for formula grant funds and had identified ways to promote integration between the workforce development and human service agencies for welfare recipients at the local level. Furthermore, recipients of competitive grant funds also coordinated their plans with state and local officials. The six states we reviewed had developed mechanisms to coordinate welfare-to-work activities with services to the hard-to-employ population. For example, in Massachusetts, an intergovernmental state steering committee prepared the state plan for formula grant funds and continues to respond to technical questions raised by local service delivery areas regarding implementation of welfare-to-work programs. 
The welfare-to-work stakeholders included representatives from the Department of Labor and Workforce Development; the Corporation for Business, Work and Learning; the Executive Office of Health and Human Services; the Department of Transitional Assistance, which is the state TANF agency; the Regional Employment Board Association; the Service Delivery Area Association; the Career Center Office; and the Division of Employment and Training. By planning and working together, this group shares information in order to minimize duplication of effort between state agencies and with the local service delivery areas. In California, planning for formula grant funds and coordination between the California Employment Development Department and the state’s Department of Social Services began as soon as the welfare-to-work program was introduced by Labor. Both departments are within California’s Health and Welfare Agency, and, even before welfare-to-work legislation, these departments had formed a coordination committee—CalWORKS—to discuss issues regarding the state’s effort to move welfare recipients into employment. At the state level, California has an interdepartmental work group that includes representatives of agencies responsible for education, transportation, housing, community services, mental health, and job services. In all, the work group includes 15 state departments responsible for 20 different programs. The state had also implemented one-stop career centers and had adopted a policy that would make the county welfare departments part of the one-stop system. The state further emphasized collaboration by holding five public hearings on the draft state plan to elicit comments from local service delivery areas and by posting its plan on the Internet to obtain public comment. The formula grant plans for the six states we reviewed required coordination between the workforce development and welfare agencies at the local level. 
For example, California required that local plans for formula funds also be approved by the county welfare department. In New York, state officials developed guidelines for local formula grant proposals that required the Private Industry Councils and area social services districts to develop a written welfare-to-work operational agreement to detail respective roles, responsibilities, and procedures within the service delivery area. At the local level, partnerships were formed to coordinate welfare-to-work activities provided by the local workforce development agencies with other human services for welfare recipients. For example, a community task force in Flagstaff, Arizona, was formed with representation from 64 state and local agencies in the service delivery area that were involved with moving individuals from welfare to work. Together, these stakeholders developed a matrix, listing each organization and the services offered to welfare recipients, to leverage resources and minimize duplication of effort. In Michigan, an official representing a local service delivery area noted that because the TANF population is the hardest to employ, she relies heavily on the expertise of the Michigan Rehabilitation Services for assistance regarding participants with more serious impediments to employment, such as substance abuse or mental illness. Additionally, because local service providers in Michigan focus on noncustodial parents, collaborative efforts with the court system are vital for identifying this population; the Family Independence Agency, which is the TANF agency, is also an important welfare-to-work partner in identifying TANF-eligible recipients. According to officials in Wisconsin, implementation of formula grant programs at the local level is a joint project between the local workforce development agency and the local TANF offices. 
This coordination allows the welfare-to-work funds to be used to expand on services provided by TANF funds, thus avoiding duplication of effort in service delivery. For the welfare-to-work competitive grants we reviewed, competitive grantees also coordinated their plans with state and local officials. For example, the competitive grant awarded in Merced, California, is planned for use in assisting welfare-to-work participants in becoming self-employed, and a strong aspect of this program is its collaboration with various partners. The program, which primarily targets noncustodial parents and public housing residents, has a coalition of partners including the Merced County Community Action Agency, Employment Development Department of Merced County, Merced County Private Industry Council/Private Industry Training Department, Merced County Human Services Agency, Housing Authority of the County of Merced, and chambers of commerce throughout the county. In several states we reviewed, the competitive grants were awarded to the same or similar entities that received a formula grant; consequently, the competitive grant linked significantly with the welfare-to-work program established under the formula grant. In these cases, the competitive grant funds were generally used to provide the more intensive services needed to help welfare recipients get and keep jobs. We provided a draft of this report to the Department of Labor for comment. Labor provided technical comments, which we incorporated in the report where appropriate. We are sending copies of this report to the Secretary of Labor and other interested parties. Copies also will be made available to others upon request. If you have any questions about this report, please contact me at (202) 512-7014. Major contributors to this report include Sigurd R. Nilsen, Betty S. Clark, and Carolyn D. Hall. To address the request, we reviewed the legislation authorizing welfare-to-work grants and the implementing regulations. 
We met with Labor officials who administer the grants and obtained information on the formula grants Labor awarded for welfare-to-work funds available for fiscal year 1998. We also obtained information about the competitive grants Labor awarded on May 27, 1998, and November 20, 1998, with welfare-to-work funds available for fiscal years 1998 and 1999. We interviewed state and local officials in six states—Arizona, California, Massachusetts, Michigan, New York, and Wisconsin—to obtain information on their plans for the welfare-to-work grant funding. We selected four of these states to take advantage of site visits made and information collected for a concurrent GAO study on states’ experiences in providing employment and training assistance to TANF clients. For this report, we conducted field visits in states that were early implementers of welfare reform and of workforce development program consolidation. Additionally, we included two other states—California and New York—in our study because they have the largest welfare caseloads. For each of the six states, we reviewed the state welfare-to-work plan, interviewed program officials for at least two selected local service delivery areas receiving allocations of the states’ formula grant funds, and interviewed one grantee that was awarded competitive grant funds. We also telephoned officials in the states that declined or did not apply for welfare-to-work grants to obtain information on the reasons for these decisions. We performed our work from May 1998 to December 1998 in accordance with generally accepted government auditing standards. [Table: percentage of total 1998 federal welfare-to-work funds awarded and 1998 federal welfare-to-work funds declined, by state. Totals may not add because of rounding.]
This amount is based on the full, available amount of fiscal year 1998 federal formula funding, $1,104,750,000. According to a Labor official, $78,962,342 of this amount was not awarded and was returned to the U.S. Treasury by Labor. [Tables: organizations awarded welfare-to-work competitive grants in the May 27, 1998, and November 20, 1998, award rounds, listed by state.] In Arizona, the Department of Economic Security is the welfare-to-work federal grant recipient and state administering entity. Arizona submitted its welfare-to-work plan on August 5, 1998. On August 20, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $9,000,000. Although Arizona was eligible for about $17,418,000 in federal welfare-to-work funds, the state did not identify matching funds sufficient to receive its maximum federal allocation. Instead, Arizona assured $4,500,000 in state matching funds over the 3-year grant period.
According to a state official, the state match appropriated by the state legislature was $1.5 million for 1998; state officials anticipate the legislature will appropriate the remaining $3 million in 1999. Arizona required its 16 Private Industry Councils to amend their JTPA plans with descriptions of how formula grant funds would be expended and to submit these amended plans for state review and approval, rather than submitting formal welfare-to-work plans. According to a state official, local plans were reviewed in November 1998, and the Private Industry Councils planned to implement their welfare-to-work formula grant programs between November 1998 and January 1999. The Arizona state plan outlined the full range of federally allowable welfare-to-work activities and targeting strategies from which the local service delivery areas may specify the target population and mix of services most appropriate for their local needs. According to the state plan, service delivery areas will determine the target group(s) to be served, and potential welfare-to-work clients may be directly referred to the service delivery areas by the state welfare recipient employment and training program, the Division of Child Support Enforcement, or the superior court through court order. A local official said that since the approval of the state plan, the Arizona Department of Economic Security has urged service delivery areas to recruit participants through direct referrals from the state welfare service system’s employment and training program, rather than design their own recruiting programs. The Arizona state plan provided local areas with guidance on the provision of local activities and services. Specifically, the plan outlined four categories of job readiness, each of which includes a specific mix of services based on the participant’s characteristics: Not Ready, Almost Ready, Ready, and Post Placement. 
However, local service delivery areas may determine the target population and mix of services most appropriate for their area’s needs. Arizona allocated all of the $9,000,000 federal formula grant to the local service delivery areas using the following formula: 50-percent weight was given to the number of people under poverty in excess of 7.5 percent of the service delivery area population, and 50-percent weight was given to the number of welfare recipients in the service delivery area having received assistance for at least 30 months. Three of the service delivery areas—Apache, Graham, and Greenlee Counties—received no federal funds because their formula allocations of the $9 million federal grant fell below the required minimum of $100,000; however, as shown in table V.1, Arizona allocated state matching funds to each of these service delivery areas. Arizona planned to use 15 percent of the state match for state welfare-to-work administration, and the balance of the state match was allocated to the service delivery areas using the same formula applied to the federal funds. Local service delivery areas must limit welfare-to-work administrative costs to 15 percent of their formula grant award. Although Arizona assured $4.5 million in matching funds for the full $9 million federal welfare-to-work award, the state legislature appropriated $1.5 million of the match during 1998. Arizona has an official state document, referenced in the federal grant agreement, that controls the disbursement of funds according to the amount of state match provided. According to a state official, until additional matching funds are appropriated, service delivery areas are only entitled to their allocations of the $3 million in federal funds that have been matched with $1.5 million in state funds. Allocations based on the current and full state match are included in table V.1. Arizona allocated 100 percent of the federal formula grant funds to the local service delivery areas. 
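The weighted substate allocation described above can be sketched in a few lines of code. This is a minimal illustration, not the state's actual computation: the $9,000,000 grant, the 50/50 weighting, and the $100,000 federal minimum come from the report, but the service delivery area figures below are hypothetical.

```python
GRANT = 9_000_000
FEDERAL_MINIMUM = 100_000  # areas whose share falls below this receive no federal funds

# Hypothetical areas: (people in poverty in excess of 7.5 percent of area
# population, welfare recipients with at least 30 months of assistance)
areas = {
    "Area A": (40_000, 12_000),
    "Area B": (15_000, 5_000),
    "Area C": (300, 90),  # too small to clear the federal minimum
}

def allocate(areas, grant, weights=(0.5, 0.5)):
    # Each factor contributes its weight times the area's share of the
    # statewide total for that factor.
    totals = [sum(a[i] for a in areas.values()) for i in range(len(weights))]
    raw = {
        name: round(grant * sum(w * a[i] / totals[i]
                                for i, w in enumerate(weights)))
        for name, a in areas.items()
    }
    # Areas falling below the required minimum receive no federal funds.
    return {name: (amt if amt >= FEDERAL_MINIMUM else 0)
            for name, amt in raw.items()}

allocations = allocate(areas, GRANT)
```

California and Massachusetts used the same structure with different factors and weights; California, for example, weighted excess poverty at 55 percent, unemployment at 15 percent, and long-term recipients at 30 percent.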
The state retained none of the allowable 15 percent governor’s discretionary funds ($1,350,000) at the state level. Of the participants enrolled in welfare-to-work programs, the state planned to place 56 percent of participants in unsubsidized jobs; of those placed, the goal is that 56 percent will still be working after 6 months and have a 1-percent increase in earnings over this time. In California, the Employment Development Department is the welfare-to-work federal grant recipient and state administering entity. California submitted its welfare-to-work plan to Labor on June 30, 1998. On July 20, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $190,417,247. The state assured $95,208,624 in state matching funds over the 3-year grant period. According to a state official, the state match was appropriated by the legislature, and $10 million was budgeted for 1998. This state match was appropriated to the California Department of Social Services, to be allocated among the state’s county welfare departments for welfare-to-work activities. The welfare departments, in collaboration with service delivery areas, locally elected officials, and other local stakeholders, will determine how to use the state matching funds to meet the welfare-to-work needs of their communities. California’s 52 local service delivery areas were required to submit welfare-to-work plans for state review and approval. California believed it was important for local areas to exhibit a sense of program direction before receiving welfare-to-work funding and wanted to ensure that workforce development agencies had coordinated their proposed welfare-to-work activities with the state’s 58 county welfare departments. The state legislature passed a law allowing the local areas to prepare joint plans; consequently, there are a total of 41 local plans. For example, the eight local service delivery areas in Los Angeles County prepared one plan for the entire county. 
According to a state official, as of September 30, 1998, 22 individuals were enrolled in welfare-to-work formula grant programs statewide. In California, the local service delivery areas are responsible for developing welfare-to-work programs to meet their communities’ demographic and workforce needs. California’s state plan noted that given the diversity of the state’s local service delivery areas, no one service strategy could be effectively applied statewide. A state official explained that urban areas with many employment opportunities may choose to focus heavily on work experiences in the private sector. On the other hand, rural areas, with fewer employers, may rely heavily on community service work experiences in their welfare-to-work programs. California allocated 85 percent, or $161,854,660, of the federal formula grant to the local service delivery areas using the following formula: 55-percent weight was given to the number of people with incomes below the poverty level in excess of 7.5 percent of the service delivery area population; 15-percent weight was given to the number of unemployed people in the service delivery area; and 30-percent weight was given to the number of adults receiving welfare for at least 30 months in the service delivery area. This formula was developed to ensure that all local areas would receive the $100,000 federally required minimum allocation. California limited local service delivery areas to an administrative cost cap of 13 percent. The governor’s welfare-to-work discretionary funds, 15 percent of the formula funds, totaled $28,562,587. With $23 million of these funds, as shown in table VI.2, the state funded 24 projects throughout the state that were selected on a competitive basis. 
The state required that the proposed use of these discretionary funds be coordinated with local workforce preparation and welfare reform partners, and applicants were encouraged to develop linkages with businesses, economic development practitioners, and supportive service agencies. Consequently, the grantees will use the funds in conjunction with other local resources to support a mix of the federally allowable welfare-to-work employment activities and services as determined by the local community. [Table VI.2: the 24 projects awarded governor’s discretionary funds through the competitive process, listed by grantee organization.] An additional $1.5 million of the governor’s discretionary funds was awarded through a competitive process to six regional collaboratives to promote and encourage education and leadership through a cooperative process. The six award recipients included Humboldt County, Ventura County, San Joaquin County, East Bay Works, Los Angeles County Collaborative, and the Inland Empire. The remaining $4,062,587 in governor’s welfare-to-work discretionary funds will be used by the state for welfare-to-work administration.
Recognizing that local performance goals may differ somewhat from those in the state plan, California set three performance goals for the welfare-to-work program as benchmarks to assist the state in providing technical assistance to local areas. California’s initial formula grant program performance goals for the first year include (1) a placement rate, (2) a follow-up employment rate, and (3) a follow-up increase in earnings goal. In 1997, California had an average caseload of about 830,000, some of whom will be provided assistance under welfare-to-work. The state goals for welfare-to-work are to place a minimum of 45 percent of welfare-to-work program participants in unsubsidized employment; of those placed, a minimum of 70 percent should be employed 6 months after placement, and their average weekly wage at a 6-month follow-up should increase by 10 percent over the average weekly wage at placement. The state required that local plans describe local performance goals for placements, job retention, and increased earnings. In Massachusetts, the Department of Labor and Workforce Development is the welfare-to-work federal grant recipient and its quasi-public subentity, the Corporation for Business, Work and Learning, is the state welfare-to-work administering entity. Massachusetts submitted its welfare-to-work plan on January 7, 1998. On February 25, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $20,692,295. The state assured $10,346,148 in state matching funds over the 3-year grant period, specifically assuring $5 million for 1998. According to a state official, this match is from funds previously appropriated by the state legislature for adult basic education and child care programs; the matching funds will be used to serve welfare-to-work-eligible participants through these programs. 
In Massachusetts, each of the 16 Regional Employment Boards was required to submit a “preplan” proposing local welfare-to-work strategies, and these plans were incorporated into the state welfare-to-work plan. Once Labor awarded the formula grant, the state required the Regional Employment Boards to submit final plans containing additional details such as local performance goals. According to a state official, the state had approved all of the local plans by April 1998 and, as of September 30, 1998, 434 individuals were enrolled in welfare-to-work formula grant programs statewide out of the target population of 7,000 likely to lose cash benefits by December 1, 1998. The state planned to use welfare-to-work funds to target and assist welfare recipients facing the most significant barriers to employment. The state’s welfare-to-work program focused on serving welfare recipients nearing the state-imposed 24-month deadline for cash assistance. A state official said that about 7,000 welfare recipients in Massachusetts were expected to lose cash assistance benefits as of December 1, 1998. Within the state focus, the local service delivery areas may further specify the target population and choose the mix of services most appropriate for their area’s needs. The state plans to spend at least 70 percent of formula funds on the hardest-to-employ long-term welfare recipients as required by law and up to 30 percent of the grant funds on individuals with characteristics of long-term welfare recipients. According to a state official, Massachusetts’ welfare-to-work program staff are finding it easier to initially enroll all participants under the 30-percent expenditure category (long-term welfare recipient). In order to enroll participants under the 70-percent expenditure category (those determined to be the hardest to employ), additional testing is necessary to verify eligibility characteristics. 
Although state and local officials are confident that there are enough people with the necessary characteristics to satisfy the 70-percent requirement, they note that the additional assessment needed for their eligibility determination is expensive and time-consuming. Massachusetts allocated 85 percent of the federal formula grant, or $17,588,452, to the local service delivery areas using the following formula: 50-percent weight was given to the number of people with incomes below the poverty level in excess of 7.5 percent of the service delivery area population; 10-percent weight was given to the number of unemployed people in the service delivery area; and 40-percent weight was given to the number of long-term welfare recipients in the service delivery area having received assistance for at least 30 months. Under the substate formula, all of the areas received more than the required $100,000 minimum; however, the state decided to allot a minimum of $400,000 to each local area. Consequently, as shown in table VII.1, $524,808 of the governor's welfare-to-work discretionary funds was used to increase the allocations for five local service delivery areas to this level. According to a state official, the Regional Employment Boards may use no more than 12.23 percent of their grants for welfare-to-work administrative purposes. The governor's welfare-to-work discretionary funds, 15 percent of the formula funds, totaled $3,103,843.
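The allocation mechanics described above (a weighted formula over area statistics, plus a state-chosen $400,000 floor funded from the governor's discretionary funds) can be sketched as follows. This is a minimal illustration, not the state's actual computation: the 50/10/40 weights and the floor match the text, but the area names and statistics are hypothetical.

```python
# Sketch (hypothetical data): weighted substate allocation with a floor
# topped up from the governor's discretionary funds, mirroring the
# 50/10/40 weighting and $400,000 minimum described in the text.

WEIGHTS = {"excess_poverty": 0.50, "unemployed": 0.10, "long_term": 0.40}

def allocate(pool, areas, floor):
    """Split `pool` across areas by weighted factor shares, then raise
    any area below `floor`; returns (allocations, discretionary top-up)."""
    totals = {k: sum(a[k] for a in areas.values()) for k in WEIGHTS}
    raw = {
        name: pool * sum(w * a[k] / totals[k] for k, w in WEIGHTS.items())
        for name, a in areas.items()
    }
    # The floor top-up is the amount drawn from discretionary funds.
    top_up = sum(max(0.0, floor - v) for v in raw.values())
    return {name: max(v, floor) for name, v in raw.items()}, top_up

# Hypothetical service delivery areas (not taken from the report).
areas = {
    "Area A": {"excess_poverty": 9000, "unemployed": 4000, "long_term": 2500},
    "Area B": {"excess_poverty": 1200, "unemployed": 800, "long_term": 300},
    "Area C": {"excess_poverty": 100, "unemployed": 50, "long_term": 20},
}
alloc, top_up = allocate(17_588_452, areas, floor=400_000)
```

Because the weighted shares sum to one, the raw allocations exhaust the substate pool, and any amount above the pool total is exactly the discretionary top-up needed to bring small areas to the floor.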
The state planned to use these funds for the following purposes: $524,808 to subsidize the five service delivery areas allocated the lowest amount of welfare-to-work formula funding; $165,000 to the Corporation for Business, Work and Learning to provide an information system technology upgrade capable of handling interagency welfare-to-work data; $718,562 for state welfare-to-work administration; and $1,695,473 for the Department of Transitional Assistance to supplement its program of assessment and structured employment assistance. Although specific, numeric performance goals were not included in the state plan, the state proposed to serve 3,979 welfare-to-work participants and will measure placement in private sector employment, placement in any employment, the duration of placement, and increases in earnings. The state required each local service delivery area to specify performance goals based on these measurements. In Michigan, the Michigan Jobs Commission is the welfare-to-work federal grant recipient and state administering entity. Michigan submitted its welfare-to-work plan on December 11, 1997. On January 29, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $42,226,331. The state assured $21,113,166 in matching funds over the 3-year grant period. According to a state official, this match was appropriated by the state legislature, and $10 million was appropriated through September 30, 1998. Michigan required its 25 local service delivery areas to submit two local plans for state review and approval: one for the federal formula grant funds and another for state matching funds appropriated by the state legislature. According to a state official, all of the local plans were approved by September 4, 1998, and, as of September 30, 1998, about 340 individuals were enrolled in welfare-to-work formula grant programs statewide. The state planned to use welfare-to-work funds primarily to serve noncustodial parents. 
On July 1, 1998, Michigan instituted a statewide noncustodial parent program and, depending on their eligibility, these parents may be served with welfare-to-work funds. To increase child support payments, Michigan’s state plan emphasized serving unemployed noncustodial parents who have child support payments in arrears and whose dependents are receiving TANF assistance; the failure of noncustodial parents to participate in the welfare-to-work program without good cause could lead to their incarceration. Local service delivery areas must devote 50 percent of their welfare-to-work grant funds to assist this population. Michigan distributed its governor’s discretionary formula funds to the local areas, providing them with more funding to meet their noncustodial parent expenditure goal. Furthermore, the courts will identify and refer eligible participants to welfare-to-work programs. Within this state focus on noncustodial parents, the local service delivery areas designed their own strategies for the use of welfare-to-work funds, including services to the hardest-to-employ TANF clients referred to welfare-to-work programs by the state welfare agency. For all welfare-to-work participants, the state plan emphasized vigorous case management during the first 90 days of employment to ensure employment retention. Michigan allocated all of the $42,226,331 federal formula grant to the local service delivery areas using the following formula: 50-percent weight was given to the number of people with incomes below the poverty level in excess of 7.5 percent of the service delivery area population, and 50-percent weight was given to the number of welfare recipients in the service delivery area who had received assistance for at least 30 months. Along with the federal formula funds, Michigan obligated $19,212,981 in state matching funds to the local service delivery areas, for a total of $61,439,312, using the same formula applied to the federal funds. 
Although two local service delivery areas were allocated less than $100,000 by formula, as shown in table VIII.1, the state provided them with their formula allocations of both federal and state welfare-to-work funds. Michigan planned to use 9 percent ($1,900,185) of the state matching funds for welfare-to-work administration; service delivery areas may spend up to 15 percent of their allocations on welfare-to-work administration. Michigan allocated 100 percent of the federal formula grant funds to the local service delivery areas. The state retained none of the allowable 15 percent governor's discretionary funds ($6,333,950) at the state level. Performance goals were not included in the state plan, but Michigan planned to measure duration of placement into unsubsidized employment, increased child support collection, and earnings, measured after 90 days of employment and other times throughout the year. In New York, the Department of Labor is the welfare-to-work federal grant recipient and state administering entity. New York submitted its welfare-to-work plan on June 29, 1998. On September 11, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $96,886,094. The state assured $48,443,047 in state matching funds over the 3-year grant period. According to a state official, the state provided half of the state match through a legislative appropriation and required local service delivery areas to provide the remaining half of the state match through in-kind or cash contributions. New York requested that an alternate agency be designated to administer the welfare-to-work program in 2 of its 33 service delivery areas. The Secretary of Labor granted waivers for these two areas, and the welfare-to-work program is administered by the human services agencies for New York City and the Syracuse/Onondaga area.
The state required these two human services agencies and the remaining 31 Private Industry Councils to submit plans proposing local welfare-to-work strategies and incorporated these plans in the New York State welfare-to-work plan. As of September 30, 1998, a state official said that most of the local areas were designing their welfare-to-work eligibility determination processes in conjunction with the local social services departments, but none of the local areas had reported enrollments of welfare-to-work participants in the formula grant program. New York's state plan for formula funds proposed a general focus on improving the connection to work, providing postemployment assistance, and serving the needs and requirements of employers. The state plan emphasized serving individuals with disabilities, many of whom have experienced long-term welfare dependency but are no longer exempt from work requirements. Within these state initiatives, the local service delivery areas may further specify the target population and choose the mix of services most appropriate for their area needs. New York allocated 85 percent, or $82,353,180, of the federal formula grant to the local service delivery areas using the following formula: 50-percent weight was given to the number of people with incomes below the poverty level in excess of 7.5 percent of the service delivery area population, 25-percent weight was given to the number of long-term welfare recipients in the service delivery area having received assistance for at least 30 months, and 25-percent weight was given to the number of unemployed people in the service delivery area. By using this formula, as shown in table IX.1, all of the service delivery areas in New York qualified for more than $100,000 in formula funds. Local areas can use up to 15 percent of their funding for administrative costs.
New York planned to use the governor’s discretionary funds, 15 percent of the formula funds or $14,532,914, for several purposes, awarding grants to a variety of organizations—some to supplement allocations to service delivery areas, others to independent organizations. The largest amount of funding, $8.5 million, was allocated to a multiagency effort called the New York Works Employment Retention and Advancement program for innovative projects to serve “work-limited” individuals, such as people with mental illness, substance abusers, and people with disabilities. Through this program, the state will fund, on a competitive selection basis, as many projects as possible, with awards ranging from a $50,000 minimum to an $850,000 maximum. These projects will provide specific services to move clients into employment and provide postemployment services to help working participants keep their jobs and increase their earnings. New York also planned to use $2,229,000 of the governor’s funds for grants to two local service delivery areas. Although all of the local service delivery areas in the state had the opportunity to obtain additional welfare-to-work moneys from the governor’s discretionary funds, only two applied. New York City and Sullivan County, the service delivery areas with the highest and lowest formula grants, received $2,168,000 and $61,000, respectively. New York’s client information campaign received $3,271,000 of the funds for projects designed to help clients make informed employment choices while transitioning off welfare. These projects include an update of the state’s resource guide, a faith-based initiative, a CD-ROM, teleconferences, and an agreement with the state’s Department of Transportation print shop for printed materials. Finally, about $500,000 was designated for the Office of Alcohol and Substance Abuse Services to provide services for welfare-to-work-eligible substance abusers. 
The state planned to place 38 percent of those who receive assistance through the welfare-to-work grant program in unsubsidized jobs; furthermore, of those placed, 46 percent were to remain employed after 6 months and to have an increase in earnings of $214 over this period. In Wisconsin, the Department of Workforce Development is the welfare-to-work federal grant recipient and state administering entity. Wisconsin submitted its welfare-to-work plan on April 13, 1998. On June 15, 1998, Labor awarded fiscal year 1998 formula grant funds to the state totaling $12,885,951. Collectively, the state and local service delivery areas assured $6,442,976 in state matching funds over the 3-year grant period. According to a state official, local service delivery areas were required to match their federal allocations, and recipients of the governor's discretionary funds matched their allocations. Wisconsin required its 11 local administrative entities to submit local welfare-to-work plans for state review and approval. One local service delivery area chose not to submit a plan. According to a state official, as of September 30, 1998, no welfare-to-work participants had been enrolled statewide in formula grant programs. Wisconsin planned to target noncustodial parents with its formula grant funds and, because its TANF caseload is low, the state also proposed to assist individuals receiving TANF child care subsidies rather than cash assistance. A state official explained that for working families, child care subsidies are considered TANF payments, making recipients eligible for welfare-to-work as long-term TANF recipients. Within the state's focus, the local service delivery areas may further specify the target population and choose the mix of services most appropriate for their area's needs.
In Wisconsin's substate allocation formula, 50-percent weight was given to the number of long-term welfare recipients in the service delivery area having received assistance for at least 30 months. Since one service delivery area could not obtain matching funds for its allocation of $174,741 and chose not to participate in the welfare-to-work program, Wisconsin deducted this amount from the total substate funds available and allocated $10,778,317 of the federal funds among the remaining delivery areas, as shown in table X.1. Two local service delivery areas, Waukesha-Ozaukee-Washington and Marathon County, qualified for less than the $100,000 federally required minimum; consequently, their allocations reverted to the governor's discretionary funds before being issued to the two local areas. Local service delivery areas may use up to 15 percent of their welfare-to-work funds for administrative costs. Table X.1 lists the service delivery area consortia, including Fox Valley (Northern Lake Winnebago and Winne-Fond-Lake), South Central (Dane County and South Central), Bay Area (Northeastern and Lake Michigan), and Southwest (Southwest and Rock County). The governor's discretionary funds were designated for several purposes: funding to United Migrant Opportunity Services for projects serving migrants and seasonal farmworkers in rural areas; $180,000 to be allocated among 8 projects serving Southeast Asian immigrants; $100,000 to the state's Division of Economic Support to modify its data support system; $100,000 to the Division of Workforce Excellence for welfare-to-work administration; and $189,934 to the Division of Economic Support to hire research analysts. Additionally, the allocations for the two local service delivery areas that received under $100,000 were temporarily added to the governor's discretionary funds and were reallocated to the two local areas. A state official said that Wisconsin did not include specific, numeric performance goals in its state plan, but the service delivery areas have local goals that are similar to their JTPA performance measures.
Wisconsin had approximately 30,000 TANF recipients in 1997, some of whom will receive assistance from welfare-to-work. Of those who receive assistance from welfare-to-work, the goal is for a significant percentage to obtain unsubsidized employment, ranging from 40 percent in Milwaukee to 80 percent in other areas of the state; of those placed in unsubsidized employment, duration goals range from 40 percent of participants remaining employed after 3 months in Milwaukee to 70 percent remaining employed after 12 months in areas with lower unemployment; and for wage increases, the goal is that participants will experience a countable increase in earnings, such as the goal of a 40-percent wage increase over previous wage levels in Milwaukee, with starting wages as high as $7.75 an hour. | Pursuant to a congressional request, GAO provided information about: (1) welfare-to-work formula and competitive grants awarded to, or declined by, states for fiscal year (FY) 1998; (2) how selected grantees are planning to use these funds; and (3) how selected grantees plan to meet welfare-to-work requirements to better integrate the states' workforce development services with other human services for welfare recipients. GAO noted that: (1) The Department of Labor (DOL) awarded formula grants to 44 states plus the District of Columbia, Guam, Puerto Rico, and the Virgin Islands with welfare-to-work funding available for FY 1998, and, as of November 20, 1998, it had awarded competitive grants to 126 organizations with combined welfare-to-work funding available for fiscal years 1998 and 1999; (2) six states--Idaho, Mississippi, Ohio, South Dakota, Utah, and Wyoming--did not participate in the welfare-to-work formula grant program; (3) these states, which would have received a total of about $71 million, chose not to participate for various reasons, including concerns about their ability to provide state matching funds; (4) Arizona was the only state that applied for formula grant funds but did not pledge sufficient matching funds to receive its maximum federal allocation; (5) the competitive grant funds Labor awarded represented all welfare-to-work funds available for FY 1998 and about a third of the FY 1999 funds; (6) most states had at least one local service organization that received competitive grant funds; (7) three of the six states GAO reviewed--Massachusetts, Michigan, and Wisconsin--outlined very specific uses for formula funds, while plans for the other three states--Arizona, California, and New York--indicated that the use of these funds would be
determined by the local service delivery areas; (8) Michigan's and Wisconsin's plans emphasized assistance to unemployed noncustodial parents--these parents, mostly fathers, often have child support payments in arrears and dependents who are receiving welfare cash assistance; (9) Massachusetts focused on serving Temporary Assistance for Needy Families recipients who are reaching their time limits on cash assistance; (10) in contrast, California's plan did not emphasize a specific welfare-to-work service strategy because state officials believed that no one service strategy could be applied effectively throughout the state; (11) similarly, Arizona and New York allowed local service delivery areas to decide on strategies for using formula grant funds; (12) state and local officials in the six states GAO reviewed noted that a stronger partnership was developing between the workforce development agencies and other human service agencies assisting welfare recipients, in part because of their joint involvement in the welfare-to-work planning process; and (13) the welfare-to-work competitive grantees also coordinated their plans with state and local officials. |
Other transaction authority was created to enhance the federal government’s ability to acquire cutting-edge science and technology by attracting nontraditional contractors that have not typically pursued government contracts. Other transactions are agreements other than government contracts, grants, or cooperative agreements and may take a number of forms. These agreements are generally not subject to the FAR. This authority originated in 1958 when Congress gave the National Aeronautics and Space Administration (NASA) the authority to enter into contracts, leases, cooperative agreements, or “other transactions.” In 1989, Congress granted the Defense Advanced Research Projects Agency (DARPA) temporary authority to use other transactions for advanced research projects. In 1991, Congress made this authority permanent and extended it to the military services. In 1993, Congress temporarily expanded DARPA’s other transaction authority, allowing the agency to use the agreements for prototype projects. The Homeland Security Act of 2002 created DHS and granted the agency the authority to enter into other transactions for research and development and prototype projects for a period of 5 years. Congress granted DHS this authority to attract nontraditional firms that have not worked with the federal government, such as high-tech commercial firms that have resisted doing business with the government because of the requirements mandated by the laws and regulations that apply to traditional FAR contracts. The Consolidated Appropriations Act for 2008 extended this authority until September 30, 2008. DHS began operations in March 2003 incorporating 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. Since then, DHS has become the third largest agency for procurement spending in the U.S. government. 
DHS’s acquisition needs range from basic services to complex investments, such as sophisticated screening equipment for air passenger security and upgrading the Coast Guard’s offshore fleet of surface and air assets. In fiscal year 2006, according to agency data, the department obligated $15.9 billion for goods and services to support its broad and complex acquisition portfolio. DHS’s S&T Directorate supports the department’s mission by serving as its primary research and development arm. In fiscal year 2006, according to S&T data, S&T obligated over $1.16 billion to fund and develop technology in support of homeland security missions. The directorate has funded technology research and development in part through the use of other transaction authority. According to agency officials, S&T is the only component within DHS that uses this authority. Because of their flexibility, other transactions give DHS considerable latitude in negotiating with contractors on issues such as intellectual property, reporting on cost, and data rights. In addition, they may relieve the parties from certain contract administration requirements that nontraditional contractors find burdensome. The number and value of DHS’s other transaction agreements have decreased since 2005. Its recent other transaction agreements represent just a small portion of its total procurement spending. Most of the department’s use of other transaction authority to date occurred between fiscal years 2004 and 2005. Though it has since used this authority less frequently, it continues to obligate funds for its earliest agreements. About 77 percent of the $443 million spent on DHS’s agreements has been on 7 of the 37 agreements. S&T contracting representatives reported that all of these agreements were for prototype projects. In fiscal year 2006, other transactions accounted for almost $153 million of DHS’s reported $15.9 billion in procurement obligations, approximately 1 percent (see fig. 1).
In addition, other transactions represent only a small portion of S&T spending. For example, the department estimates that from fiscal years 2004 through 2007, S&T spent 13 percent of its total obligations on its other transaction agreements. DHS reported a total of 37 other transaction agreements, 30 of which were entered into in fiscal years 2004 and 2005. Accordingly, 88 percent of total spending was for agreements reached in fiscal years 2004 and 2005 (see fig. 2). While the total number of new agreements has decreased since 2005, the total obligations under these agreements have generally increased because funds are obligated for agreements made in prior years (see fig. 3). About 77 percent of obligations were for the seven largest other transaction agreements (see appendix I). According to S&T, all of these agreements included at least one nontraditional contractor, most commonly as a subcontractor. Though the acquisition outcomes related to DHS’s use of other transaction authority have not been formally assessed, the department estimates that at least some of these agreements have resulted in time and cost savings. According to an S&T contracting representative, all of its current agreements are for development of prototypes, but none of the projects have yet reached production. Therefore, it is too soon to evaluate the results. However, the department believes that some of these agreements have reduced the time it takes to develop its current programs, as compared to a traditional FAR-based contract. In addition, DHS has stated that its two cost-sharing agreements for development of its Counter-MANPADS technology have resulted in savings of over $27 million, possibly more. However, the extent to which these savings accrue to the government or to the contractor is unclear. Soon after DHS established the S&T Directorate, S&T issued other transaction solicitations using some commonly accepted acquisition practices and knowledge-based acquisition principles.
For example, DHS used integrated product teams and contractor payable milestone evaluations to manage other transaction agreements. To quickly implement its early projects, S&T relied on experienced staff from DARPA, other government agencies, and industry to help train S&T program and contracting staff in using other transactions and help DHS create and manage the acquisition process. S&T also brought in program managers, scientists, and experts from other government agencies on a temporary basis to provide assistance in other areas. Beyond these efforts, GAO found some areas for improvement and recommended that: DHS provide guidance on when to include audit provisions in agreements; provide more training on creating and managing agreements; capture knowledge gained from current agreements for future use; and take measures to help rotational staff avoid conflicts of interest. DHS has implemented some measures to address many of these recommendations; however, it has not addressed all of them. Provide guidance: We recommended that DHS develop guidance on when it is appropriate to include audit provisions in other transaction agreements. Subsequently, DHS modified its management directive to add guidance on including GAO audit provisions in agreements. However, the guidance only addresses prototype agreements over $5 million. While S&T contracting officials recently told us that they have only issued other transaction agreements for prototypes, they noted that the department intends to issue agreements for research projects in the future. In addition, it is unclear how the $5 million threshold is to be applied. In at least one agreement, the audit provision did not apply to subcontractors unless their work also exceeded the $5 million threshold. Provide additional training: We recommended that DHS develop a training program for staff on the use of other transactions. 
DHS has developed a training program on other transactions, and S&T contracting representatives said they have plans to conduct additional sessions in 2008. The training includes topics such as intellectual property rights, acquisition of property in other transactions, and foreign access to technology created under other transaction authority. An S&T contracting representative told us the Directorate currently has three staff with other transaction warrants and has additional in-house expertise to draw on as needed, and they said S&T no longer needs to rely on other agencies for contracting assistance. Capture lessons learned: We recommended that DHS capture knowledge obtained during the acquisition process for use in planning and implementing future other transaction projects. In 2005, DHS hired a consultant to develop a “lessons learned” document based on DOD’s experience using other transactions. This is included in DHS’s other transaction training. However, it was not evident based on our follow-up work that DHS has developed a system for capturing knowledge from its own experience regarding other transaction agreements the directorate has executed since it was created. Ethics: We made a number of recommendations regarding conflicts of interest and ethics within S&T. When the S&T Directorate was established in 2003, it hired scientists, engineers, and experts from federal laboratories, universities, and elsewhere in the federal government for a limited time under the Intergovernmental Personnel Act (IPA) with the understanding that these staff would eventually return to their “home” institution. This created potential conflicts of interest for those staff responsible for managing S&T portfolios, as these staff could be put in a position to make decisions affecting their “home” institutions.
We recommended that DHS help the portfolio managers assigned through IPA comply with conflict of interest laws by improving the S&T Directorate’s management controls related to ethics. DHS has complied with these recommendations to define and standardize the role of these portfolio managers in the research and development process; provide regular ethics training for these portfolio managers; and determine whether conflict of interest waivers are necessary. The only outstanding recommendation concerns establishing a monitoring and oversight program of ethics-related management controls. Furthermore, an S&T official told us the use of rotational portfolio managers has largely been eliminated with the exception of one portfolio manager who is currently serving a two-year term. With federal agencies' increased reliance on contractors to perform mission-related functions comes an increased focus on the need to manage acquisitions in an efficient, effective, and accountable manner. The acquisition function is one area GAO has identified as vulnerable to fraud, waste, abuse, and mismanagement. An unintended consequence of the flexibility provided by other transaction authority is the potential loss of accountability and transparency. Accordingly, management controls are needed to ensure intended acquisition outcomes are achieved while minimizing operational challenges. Operational challenges to successfully making use of other transaction authority include: attracting and ensuring the use of nontraditional contractors; acquiring intellectual property rights; financial control; and maintaining a skilled acquisition workforce. Nontraditional Contractors: One of the goals of using other transactions is to attract firms that traditionally have not worked with the federal government. S&T contracting officials confirmed that at least one nontraditional contractor participated in each other transaction agreement, generally as a partner to a traditional contractor.
We have not assessed the extent of the involvement of nontraditional contractors or what portion of the funding they receive. However, we have reported in the past that DOD had a mixed record in attracting nontraditional contractors. Intellectual Property Rights: One reason companies have reportedly declined to contract with the government is to protect their intellectual property rights. Alternatively, insufficient intellectual property rights could hinder the government’s ability to adapt developed technology for use outside of the initial scope of the project. Limiting the government’s intellectual property rights may require a trade-off. On the one hand, this may encourage companies to work with the government and apply their own resources to efforts that advance the government’s interests. However, it also could limit the government’s production options for items that incorporate technology created under an other transaction agreement. For example, we previously reported that DARPA received an unsolicited proposal from a small commercial firm to develop and demonstrate an unmanned aerial vehicle capable of vertical take-off and landing based on the company’s existing proprietary technology. DARPA agreed not to accept any technical data in the $16.7 million agreement. To obtain government purpose rights, DOD would have to purchase 300 vehicles or pay an additional $20 million to $45 million. Therefore, using an other transaction agreement could potentially limit competition and lead to additional costs for follow-on work. Financial Controls and Cost Accounting: Other transactions are exempt from cost accounting standards (CAS). While other transaction recipients have flexibility in tracking costs, they still need to provide cost information and demonstrate that government funds are used responsibly. This is particularly true for traditional contractors that are performing work under both FAR-based contracts as well as other transaction agreements.
For example, contractors may use in-kind donations to satisfy cost-sharing requirements; therefore, it is important that DHS has a means to ensure that companies do not satisfy their other transaction cost-sharing requirements with work funded under a FAR-based contract. Maintaining a Skilled Acquisition Workforce: Other transactions do not have a standard structure based on regulatory guidelines and therefore can be challenging to create and administer. Prior GAO work has noted the importance of maintaining institutional knowledge sufficient to retain government control. The unique nature of other transaction agreements means that federal government acquisition staff working with these agreements should have experience in planning and conducting research and development acquisitions, strong business acumen, and sound judgment to enable them to operate in a relatively unstructured business environment. Retaining a skilled acquisition workforce has been a continual challenge at DHS, and we have ongoing work in this area for this Committee. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information regarding this testimony, please contact John Needham at (202) 512-4841 or ([email protected]). Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Staff making key contributions to this statement were Amelia Shachoy, Assistant Director; Brandon Booth; Justin Jaynes; Tony Wysocki; Karen Sloan; Laura Holliday; and John Krump. [Table of named entities: American Airlines and ABX Air Inc.; U.S. Genomics, Inc.; National Institute for Hometown Security, Inc.; Science Applications International Corp (SAIC); Genomic HealthCare (GHC).] This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Other transaction authority was created to enhance the federal government's ability to acquire cutting-edge science and technology by attracting nontraditional contractors that have not typically pursued government contracts. The Homeland Security Act of 2002 granted the department the temporary authority to enter into other transactions for research and prototype projects for a period of 5 years. The Consolidated Appropriations Act of 2008 extended this authority until September 30, 2008. This testimony discusses (1) the extent to which DHS has used its other transaction authority, (2) the status of DHS's implementation of GAO's previous recommendations, and (3) the accountability challenges associated with the use of these agreements. DHS entered into 37 other transaction agreements between fiscal years 2004 and 2007, most of which were entered into in the first 2 years. Though it has since used this authority less frequently, it continues to obligate funds for its earliest agreements. Furthermore, about 77 percent of the dollars spent on these agreements have been for 7 of DHS's 37 agreements. Contracting representatives also told us that all of the agreements to date were for prototype projects and that each agreement included at least one nontraditional contractor. GAO plans further review of DHS's use of other transaction agreements as required by the Homeland Security Act of 2002. DHS has made efforts to improve its use of other transaction agreements and to prevent conflicts of interest. 
The department has taken the following steps to address prior GAO recommendations, including: (1) creating guidance on when to include audit provisions in other transaction agreements; (2) creating a training program on using these agreements; and (3) improving controls over conflicts of interest. GAO also recommended that DHS capture knowledge gained from the agreements it has entered into. The department has compiled lessons learned from the Department of Defense, but the document is not related to DHS's experience. Furthermore, while DHS created guidance on when to include audit provisions in agreements, its guidance only applies to certain prototype projects and only in certain circumstances. Risks inherent with the use of other transaction agreements create several accountability challenges. These challenges include attracting and ensuring the use of nontraditional contractors, acquiring intellectual property rights, ensuring financial control, and maintaining a skilled acquisition workforce with the expertise to create and maintain these agreements. |
The goal of SNAP, formerly known as the federal Food Stamp Program, is to help low-income individuals and households obtain a more nutritious diet and help alleviate their hunger. It does so by supplementing their income with benefits to purchase allowable food items. The federal government pays the full cost of the benefits and shares the responsibility and costs of administering the program with the states. Specifically, FNS is responsible for promulgating program regulations and ensuring that states comply with these regulations by issuing guidance and monitoring state activities. FNS headquarters officials are assisted in this oversight work by federal officials in seven regional offices. FNS also determines which retailers are eligible to accept SNAP benefits in exchange for food and investigates and resolves cases of retailer fraud. State officials, on the other hand, are responsible for determining the eligibility of individuals and households, calculating the amount of their monthly benefits and issuing such benefits on an electronic benefit transfer (EBT) card in accordance with program rules. States are also responsible for investigating possible violations by benefit recipients and pursuing and acting on those violations that are deemed intentional. Intentional program violations include acts of fraud, such as making false or misleading statements in order to obtain benefits, and trafficking (i.e., using benefits in unallowable ways, such as by exchanging benefits for cash or non-food goods and services, or attempting to do so). For example, recipients can traffic benefits by selling EBT cards to another person, exchanging the EBT card and the corresponding Personal Identification Number (PIN) for cash or non-food goods or services (e.g., rent or transportation). These sales can occur in person or by posting offers on social media and e-commerce sites.
Recipients can then contact state agencies to report the sold EBT cards as lost or stolen and receive new cards which can be used for future trafficking transactions, for example, when the benefits are replenished the next month. According to a September 2012 U.S. Department of Agriculture Office of Inspector General (USDA OIG) report, the magnitude of program abuse due to recipient fraud is unknown because states do not have uniform ways of compiling the data that would provide such information. As a result, the USDA OIG recommended that FNS determine the feasibility of creating a uniform methodology for states to calculate their recipient fraud rate. FNS reported that it took action on this recommendation but ultimately determined that it would be infeasible to implement as it would require legislative authority mandating significant state investment of time and resources in investigating, prosecuting and reporting fraud beyond current requirements. In the selected states we reviewed in 2014, officials told us they were using well-known tools for detecting potential recipient eligibility fraud, such as data matching and referrals obtained through fraud reporting hotlines and websites. Specifically, at that time, all 11 states that we reviewed had fraud hotlines or websites, and all matched information about SNAP applicants and recipients against various data sources to detect those potentially improperly receiving benefits, as FNS recommended or required. (See table 1.) Beyond the required and recommended data matches, at the time of our report, Florida, Texas, Michigan, and one county in North Carolina used specialized searches that checked numerous public and private data sources, including school enrollment, vehicle registration, vital statistics, and credit reports to detect potential fraud prior to providing benefits to potential recipients. 
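The data matching described above can be sketched as a simple join between applicant records and an external data source. The following is a minimal illustration, assuming hypothetical record layouts and field names; actual state systems and data sources are not specified at this level of detail in the source.

```python
# Minimal sketch of eligibility data matching: flag SNAP applicants whose
# identifiers also appear in an external data source (e.g., vital statistics
# or school enrollment files). Record layouts here are hypothetical
# illustrations, not actual state system formats.

def match_applicants(applicants, external_records):
    """Return applicants whose SSN also appears in the external source."""
    external_ssns = {rec["ssn"] for rec in external_records}
    return [a for a in applicants if a["ssn"] in external_ssns]

applicants = [
    {"name": "A. Smith", "ssn": "111-11-1111"},
    {"name": "B. Jones", "ssn": "222-22-2222"},
]
vital_statistics = [{"ssn": "222-22-2222"}]  # e.g., a deceased-persons file

flagged = match_applicants(applicants, vital_statistics)
print([a["name"] for a in flagged])  # → ['B. Jones']
```

In practice, as the source notes, states run such matches against many sources (school enrollment, vehicle registration, credit reports) before benefits are issued, so that potential fraud is caught at application time rather than recovered later.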
Florida officials we interviewed shifted the majority of their anti-fraud resources to more cost-effective and preventive efforts in identifying potential fraud by developing tools geared towards detecting eligibility fraud and improper benefit receipt, such as identification verification software and profiles that case workers could use to identify error-prone applications. These state officials stated that this focus on preventive efforts was key to helping them manage recent constraints on their investigative budgets. To track potential trafficking, officials in the 11 states reported that they analyzed patterns of EBT transactions and monitored replacement card data and online postings pursuant to FNS’s requirements and guidance. (See table 2.) At the time of our 2014 report, most of the selected states reported difficulties in conducting fraud investigations due to either reduced or stagnant staff levels while SNAP recipient numbers greatly increased from fiscal year 2009 through 2013. (See figure 1.) Furthermore, state investigators in all 11 states we reviewed were also responsible for pursuing fraud in other public assistance programs, such as Medicaid, Temporary Assistance for Needy Families and child care and housing assistance programs. However, at the time of our report, some states implemented a strategy to leverage their available investigative resources. Specifically, four of the states we reviewed—Florida, Massachusetts, Michigan and Nebraska— had implemented and two states—Maine and North Carolina—were in the process of implementing state law enforcement bureau (SLEB) agreements. According to FNS officials, the agency was supportive of states’ efforts to establish these agreements between state SNAP agencies and federal, state, and local law enforcement agencies, which would enable state SNAP investigators to cooperate in various ways with local, state, and federal law enforcement agents, including those within the USDA OIG. 
For example, under these agreements, law enforcement agencies can notify the SNAP fraud unit when they arrest someone who possesses multiple EBT cards, and SNAP agencies can provide “dummy” EBT cards for state and local officers to use in undercover trafficking investigations. Officials in one county in Florida told us at the time of our report that this type of cooperation allowed local police officers to make 100 arrests in the county’s first undercover operation targeting recipients who were allegedly trafficking SNAP benefits. At the time of our report, some state officials suggested changing the financial incentives structure to help support the costs of investigating potential SNAP fraud because some investigative agencies were not rewarded for cost-effective anti-fraud efforts that could prevent ineligible people from receiving benefits. According to GAO’s Fraud Prevention Framework, investigations, although costly and resource-intensive, can help deter future fraud and ultimately save money. Officials in one state told us that it would help its anti-fraud efforts if FNS would provide additional financial incentives for states to prevent potential fraud at the time of application beyond what is currently provided for recovered funds. Specifically, when fraud by a recipient is discovered, the state may generally retain 35 percent of the recovered overpayment, but when a state detects potential fraud by an applicant and denies the application, there are no payments to recover. In our 2014 report, we found that, upon testing, FNS’s recommended approaches to detecting online fraud were of limited utility and selected states had limited success with using FNS’s required approach to replacement card monitoring.
Specifically, we found that FNS provided states with guidance on installing free web-based software tools for monitoring certain e-commerce and social media websites for online sales of SNAP benefits, but some officials from the selected states reported problems with these detection tools. According to FNS, these tools could automate the searches that states would normally have to perform manually on these websites, which states reported as being cumbersome and difficult given limited resources. Of the 11 states we reviewed, officials from only one reported that the tool worked well for identifying SNAP recipients attempting to sell their SNAP benefits online. At the time of our review, FNS officials acknowledged that there were limitations to the monitoring tools, and stated that they provided these tools at the request of states to help with monitoring efforts. In 2014, we tested these automated detection tools for certain periods of time on selected geographical locations covering our selected states and found them to be of limited effectiveness for states’ fraud detection efforts. For example, our testing of the recommended automated tool for monitoring e-commerce websites found that the tool did not detect most of the postings found through our manual website searches. Specifically, out of 1,180 postings we reviewed manually, we detected 28 postings indicative of potential SNAP trafficking. Twenty-one of these 28 postings were not detected by FNS’s recommended monitoring tool. We also found the automated tool for monitoring social media websites to be impractical for states’ fraud detection efforts, given that, for example, it could not be tailored to a specific location. We concluded that this could have potentially limited a state’s ability to effectively determine whether the postings detected were relevant to the state’s jurisdiction. 
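The kind of automated search these monitoring tools were meant to perform, and that GAO's testers did manually, can be approximated by keyword matching over posting text. The sketch below is purely illustrative; the actual tools and search terms are not described in the source, and the keyword lists here are hypothetical.

```python
# Illustrative keyword scan for online postings that may indicate SNAP
# benefit sales. The keyword lists are hypothetical examples; the real
# monitoring tools FNS recommended are not specified in the source.

import re

BENEFIT_KEYWORDS = [r"\bebt\b", r"food stamps?", r"\bsnap\b"]
SALE_KEYWORDS = [r"for sale", r"selling", r"\$\d+", r"\bcash\b"]

def is_suspicious(posting: str) -> bool:
    """Flag a posting that mentions SNAP/EBT together with sale language."""
    text = posting.lower()
    has_benefit = any(re.search(p, text) for p in BENEFIT_KEYWORDS)
    has_sale = any(re.search(p, text) for p in SALE_KEYWORDS)
    return has_benefit and has_sale

postings = [
    "Selling $200 EBT card for $100 cash, meet downtown",
    "Looking for a used bike, cash ready",
]
print([is_suspicious(p) for p in postings])  # → [True, False]
```

GAO's finding that the automated tool missed 21 of 28 manually identified postings suggests the real difficulty lies in recall: sellers vary their wording, so any fixed keyword list like the one above will miss many relevant posts.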
In 2014, we also reported that FNS required that states examine replacement card data as a potential indicator of trafficking, but state officials we interviewed reported difficulties using the data as a fraud detection tool. In 2014, FNS finalized a rule requiring states to monitor replacement card data and send notices to those SNAP households requesting excessive replacement cards, defined as at least four cards in a 12-month period. Officials we interviewed in the 11 states reviewed reported tracking recipients who make excessive requests for replacement EBT cards, as required by FNS, but said they had not had much success in detecting fraud through that method. Specifically, officials in 4 states reported that they had not initiated any trafficking investigations as a result of the monitoring, officials in 5 states reported low success rates for such investigations, and officials in 1 state reported that they had just started tracking the data. Officials in only 1 state reported some success using the data to detect potential trafficking. Furthermore, officials from 7 of the 11 states we reviewed reported that the current detection approach specified by FNS often led them to people who had made legitimate requests for replacement cards for reasons such as unstable living situations or a misunderstanding of how to use the SNAP EBT card. At the time of our report, FNS was aware of states’ concerns about the effectiveness of this effort, but continued to stress that monitoring these data was worthwhile. We found that while all of the selected states reported analyzing SNAP replacement card data to detect fraud as required by FNS, a more targeted approach to analyzing high-risk replacement card data potentially offered states a way to better use the data as a fraud detection tool. 
Specifically, we analyzed fiscal year 2012 replacement card data in three selected states—Michigan, Massachusetts, and Nebraska—using an approach aimed at better identifying SNAP households requesting replacement cards that are at higher risk of trafficking benefits. Our approach took into account FNS’s regulation that defined excessive replacement cards as at least four requested in a 12-month period. However, we also considered the monthly benefit period of replacement card requests by focusing on SNAP households receiving replacement cards in four or more unique monthly benefit periods in a year. Based on our analysis, we determined that because SNAP benefits are allotted on a monthly basis, a recipient who is selling the benefits on their EBT card and then requesting a replacement card would generally have only one opportunity per month to do so. Thus, if a SNAP recipient was requesting a replacement card because they had just sold their EBT card and its associated SNAP benefits, it was unlikely that there would be more benefits to sell until the next benefit period. As a result, we determined that additional replacement card requests in the same benefit period may not indicate increased risk of trafficking. Using this approach in the three selected states, our 2014 analysis reduced the number of households that should be considered for further review compared to the FNS requirement that states look at replacement cards replaced four or more times in 12 months. We then reviewed fiscal year 2012 transaction data for this smaller number of households to identify suspicious activity that could indicate trafficking. We identified 7,537 SNAP recipient households in these three selected states that both received replacement cards in four or more monthly benefit periods in fiscal year 2012, and made at least one transaction considered to be a potential sign of trafficking around the time of the replacement card issuance, as shown in the table below. 
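The period-based counting just described, flagging distinct monthly benefit periods with a replacement card rather than raw card counts, can be sketched in a few lines. The record layout is a hypothetical simplification of state EBT data.

```python
# Sketch of GAO's targeted replacement-card analysis: instead of flagging any
# household with 4+ replacement cards in 12 months (FNS's threshold), flag
# only households that received cards in 4+ *distinct* monthly benefit
# periods. Field names are illustrative; actual EBT data layouts vary.

from collections import defaultdict

def flag_households(replacement_requests, min_periods=4):
    """replacement_requests: iterable of (household_id, benefit_month) pairs.
    Returns household IDs with replacement cards in at least min_periods
    distinct monthly benefit periods."""
    periods = defaultdict(set)
    for household, month in replacement_requests:
        periods[household].add(month)
    return {h for h, months in periods.items() if len(months) >= min_periods}

requests = [
    ("H1", "2012-01"), ("H1", "2012-01"), ("H1", "2012-01"), ("H1", "2012-02"),
    ("H2", "2012-01"), ("H2", "2012-03"), ("H2", "2012-05"), ("H2", "2012-08"),
]
# H1 has 4 cards but only 2 distinct benefit periods; H2 has 4 distinct periods.
print(flag_households(requests))  # → {'H2'}
```

Under FNS's raw-count rule both example households would be flagged; counting distinct benefit periods drops H1, whose repeated requests within one month could not reflect repeated sales of that month's benefits.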
We found that these 7,537 households made over $26 million in total purchases with SNAP benefits during fiscal year 2012. (See table 3.) We also found that by comparing the number of benefit periods with replacement cards and the total number of transactions flagged for potential trafficking, states may be able to better identify those households that may be at higher risk of trafficking. For example, as shown in the figure below, while there were 4,935 SNAP households in Michigan that received an excessive number of replacement cards, we identified just 39 households that received excessive replacement cards and made transactions resulting in 10 or more trafficking flags. We concluded in 2014 that while state SNAP officials may not want to limit their investigations to such a small number of households, this type of analysis may help provide a starting point for identifying higher priority households for further review. Furthermore, we reported that our more targeted approach may also be particularly helpful given that states had limited resources for conducting investigations. In 2014, we reported that FNS had increased its oversight of state anti-fraud activities in recent years by issuing new regulations and guidance, conducting state audits, and commissioning studies on recipient fraud since fiscal year 2011. For example, in fiscal year 2013, for the first time, FNS examined states’ compliance with federal requirements governing SNAP anti-fraud activities through Recipient Integrity Reviews. These assessments included interviews with state officials, observations of state hearing proceedings, and case file reviews in all 50 states and the District of Columbia. Following these reviews, FNS regional officials issued state reports that included findings and, where appropriate, required corrective actions.
Despite these efforts, at the time of our report, FNS did not have consistent and reliable data on states’ anti-fraud activities because its reporting guidance lacked specificity. For example, through our review of the 2013 Recipient Integrity Review reports, we also found that FNS had a nationwide problem with receiving inaccurate data on state anti-fraud activities through the Program and Budget Summary Statement (Form FNS-366B). Some federal and state officials we interviewed recognized that there was not a consistent understanding of what should be reported on the FNS-366B form because the guidance from FNS was unclear. For example, on the form in place during the time of our report, FNS instructed states to report investigations for any case in which there was suspicion of an intentional program violation before and after eligibility determination. According to state and federal officials we interviewed, this information did not clearly establish a definition for what action constitutes an investigation and should then be reported on this form. After reviewing states’ reports, we found examples of inconsistencies in what states reported as investigations on the FNS-366B forms. Specifically, in fiscal year 2009, one state had about 40,000 recipient households, but reported about 50,000 investigations. During the same year, another state that provided benefits to a significantly larger population (about 1 million recipient households) reported about 43,000 investigations. Officials from the state that served the smaller population, but had the larger number of investigations, explained that they included investigative activities such as manually reviewing paper files provided by the state’s Department of Labor for each SNAP recipient with reported wages in the state. 
Officials from the state that served the larger population said that they counted the number of times a potential fraud case was actively reviewed by investigators, including interviews with witnesses and researching of related client information. Given these differences, state officials said that FNS and states were not able to compare program integrity performance because there was no standardization of data collection across states. As a result of our 2014 findings, we made several recommendations, and FNS officials agreed with all of these recommendations and are taking actions to address them. Specifically, we recommended that the Secretary of Agriculture direct the Administrator of FNS to: explore ways that federal financial incentives can better support cost-effective state anti-fraud activities; establish additional guidance to help states analyze SNAP transaction data to better identify SNAP recipient households receiving replacement cards that are potentially engaging in trafficking, and assess whether the use of replacement card benefit periods may better focus this analysis on high-risk households potentially engaged in trafficking; reassess the effectiveness of the current guidance and tools recommended to states for monitoring e-commerce and social media websites, and use this information to enhance the effectiveness of the current guidance and tools; and take steps, such as guidance and training, to enhance the consistency of what states report on their anti-fraud activities. While FNS agreed with the recommendations and is taking steps to address them, it has yet to fully develop the detection tools and improved reporting methods that would address these recommendations.
To explore ways to provide better federal financial incentives, FNS reported it published a Request for Information in the Federal Register in 2014 to solicit state and other stakeholder input on how it could more effectively incentivize states to improve overall performance, including in the area of program integrity, with new bonus awards. However, more recently, FNS officials reported that, based on the feedback from this process, they have decided not to pursue bonus awards for anti-fraud and program integrity activities at this time. At the time of our 2014 report, FNS officials also stated they could not make changes in the state retention rate for overpayments without a change to federal law. FNS officials reported that they have provided states with technical assistance for how to effectively utilize replacement card data as a potential indicator of trafficking. Specifically, FNS has worked with seven SNAP state agencies: New York (Onondaga County), Pennsylvania, South Carolina, Wisconsin (Milwaukee County), California (Los Angeles County), Kansas, and Texas to help these states more effectively identify SNAP recipient trafficking, using models that incorporate predictive analytics. FNS officials stated that the models use a variety of eligibility and transaction data, including replacement card data, and have demonstrated a significant improvement in effectiveness in these states. According to FNS officials, over 90 percent of South Carolina’s investigations of potential trafficking resulted in disqualifications from SNAP, which FNS officials stated is an increase of 29 percent from the state’s investigation success rate prior to using FNS’s model. Based on these state results, FNS officials stated that FNS was targeting four additional states in fiscal year 2016 for technical assistance in implementing the model: Arizona, the District of Columbia, Utah, and Washington. 
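In the spirit of the predictive models described above, which combine eligibility and transaction data to prioritize investigations, a state's scoring step might look like the following. This is only a hedged sketch: the features, weights, and thresholds are entirely hypothetical, and FNS's actual models are not published in this testimony.

```python
# Illustrative risk-scoring sketch, NOT FNS's actual model. Combines two
# simple indicators from the source (replacement-card benefit periods and
# flagged transactions) into a 0-1 score; weights are hypothetical.

def trafficking_risk_score(household):
    """Return an illustrative 0-1 risk score for a household record."""
    score = 0.0
    score += 0.15 * min(household["replacement_periods"], 4)   # capped at 4
    score += 0.05 * min(household["flagged_transactions"], 8)  # capped at 8
    return min(score, 1.0)

high_risk = {"replacement_periods": 4, "flagged_transactions": 10}
low_risk = {"replacement_periods": 1, "flagged_transactions": 0}
print(trafficking_risk_score(high_risk) > trafficking_risk_score(low_risk))  # → True
```

A real predictive model would learn such weights from past investigation outcomes rather than fixing them by hand, which is presumably why FNS's technical assistance to states reportedly improved investigation success rates.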
Furthermore, as of May 2016, FNS officials had reported that FNS is conducting a training program for state technical staff to teach them how to build predictive models that incorporate the use of card replacement data. FNS officials also reported that they continue to provide technical assistance to states on the effective use of social media and e-commerce monitoring and have further studied the use of these tools. Most recently, FNS officials reported that, in 2016, the agency conducted an analysis to evaluate states’ current use of social media in their detection of SNAP trafficking. Based on the information gained through this analysis, FNS officials reported that they plan to determine how best to present further guidance to state agencies on using social media to combat trafficking. As of May 2016, FNS had also redesigned the form FNS-366B used to collect consistent recipient integrity performance information and submitted a draft to the Office of Management and Budget (OMB). FNS officials anticipate OMB approval of the revised form prior to the end of fiscal year 2016, and the form is expected to be implemented in fiscal year 2017. FNS reported it published an interim final rule on January 26, 2016, (effective March 28, 2016), changing the reporting frequency of the form from an annual submission based on the state fiscal year to a quarterly submission based on the federal fiscal year. To date, FNS officials reported that they provided 4 separate trainings to approximately 400 state agency and FNS regional office personnel, covering the new and modified elements of the final draft form and the corresponding instructions. - - - - - In conclusion, the challenges that states have faced in financing and managing recipient anti-fraud efforts heighten the need for more efficient and effective tools for safeguarding SNAP funds. 
To best guide states in these efforts, FNS officials need reliable information on what can currently be done with available federal and state resources. As of May 2016, FNS officials have reported progress in studying current anti-fraud approaches and developing better data on them but are still in the process of developing the final tools and guidance for enhancing the integrity of the SNAP program. Chairmen Meadows and Lummis, Ranking Members Connolly and Lawrence, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this statement, please contact Kay Brown, Director, Education, Workforce, and Income Security Issues, at 202-512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include James Bennett, Kate Blumenreich, Alexander Galuten, Danielle Giese, Scott Hiromoto, Kathryn Larin, Flavio Martinez, Jessica Orr, Deborah Signer, Almeta Spencer and Shana Wallace. | In fiscal year 2015, SNAP, the nation's largest nutrition support program, provided about 46 million people with $70 billion in benefits. Fraud has been a long-standing concern in the program, and state agencies are responsible for addressing SNAP recipient fraud. In 2014, GAO reviewed state and federal efforts to combat SNAP recipient fraud.
This testimony summarizes: (1) findings from GAO's 2014 report and (2) the steps FNS has taken since then to address GAO's recommendations. For its 2014 report, GAO reviewed relevant federal laws, regulations, guidance, and documents; interviewed officials in 11 states; interviewed federal officials; tested fraud detection tools using fiscal year 2012 program data, the most recent available at the time of GAO's report; and monitored websites for potential trafficking online. Although GAO's results are not generalizable to all states, the selected states served about a third of SNAP recipient households. For this statement, GAO reviewed FNS's actions to date on its recommendations. In 2014, GAO found that selected states employed a range of tools to detect potential Supplemental Nutrition Assistance Program (SNAP) recipient fraud, but they faced challenges, including inadequate staffing levels, that limited the effectiveness of their actions, and the Food and Nutrition Service (FNS) lacked data about the states' efforts. The 11 states GAO studied reported using detection tools required or recommended by FNS, among others, to combat SNAP recipient fraud. However, 8 of these states reported difficulties in conducting fraud investigations due to reduced or stagnant staff levels and funding despite program growth, and some state officials suggested changing the financial incentives structure to help support the costs of investigating potential fraud. GAO also found limitations to the effectiveness of website monitoring tools and the analysis of card replacement data states used, under the direction of FNS, for fraud detection. Specifically, GAO found FNS's recommended website monitoring tools to be less effective than manual searches and impractical for detecting internet posts indicative of SNAP trafficking—the misuse of program benefits to obtain non-food items. 
Further, although FNS required states to monitor SNAP households that request at least four replacement electronic benefit transfer (EBT) cards in a year, GAO found that multiple EBT card requests in the same benefit period may not indicate increased risk of trafficking. GAO found that, by adjusting the analysis to focus on SNAP households that both requested cards in at least four different monthly benefit periods and engaged in suspicious transactions, states could possibly detect potential fraud more accurately. For example, in 2014, GAO found that 4,935 SNAP households in Michigan received at least 4 replacement EBT cards in a year. However, out of these households, GAO identified 39 households that both received multiple replacement cards in at least four different monthly benefit periods and engaged in suspicious transactions indicative of SNAP trafficking, resulting in 10 or more trafficking flags. GAO reported that this type of targeted analysis may help provide states with a starting point for identifying higher priority households for further review, which can be particularly helpful given that states had reported having limited resources for conducting investigations. GAO also found that, despite FNS's increased oversight efforts at that time, it did not have consistent and reliable data on states' anti-fraud activities because its reporting guidance lacked specificity. For example, the FNS guidance did not define the kinds of activities that should be counted as investigations, resulting in inconsistent data across states. In 2014, GAO recommended, among other things, that FNS reassess current financial incentives, detection tools, and guidance to help states better combat fraud. As of May 2016, FNS reported progress in studying current anti-fraud approaches and developing better data on them, and is in the process of developing the final tools and guidance states need to help enhance the integrity of the SNAP program.
In 2014, GAO recommended that FNS reassess its financial incentives for state anti-fraud efforts and tools for website monitoring; establish additional guidance related to EBT replacement card data; and enhance the reliability of state reporting. FNS agreed with GAO's recommendations and has been taking steps to address them. GAO is not making new recommendations in this testimony statement.
Although most organizational components within IRS are involved, the Tax Forms and Publications Division, part of Media and Publications within IRS’s Wage and Investment (W&I) Division, is primarily responsible for creating and improving tax forms, instructions, and other documents. One goal of the Tax Forms and Publications Division is to make tax forms and instructions as clear and understandable as possible. It is divided into three branches—Individual Forms and Publications, Business Forms and Publications, and Tax Exempt/Government Entities and Specialty Forms and Publications. As of late January 2003, 103 persons were assigned to the Tax Forms and Publications Division, including about 15 persons whose primary responsibility was creating, revising, and reviewing individual income tax forms and instructions. Many tax forms and instructions are revised annually, often with short turnaround times and in response to tax law changes. IRS also periodically reviews tax forms and, when appropriate, schedules them for revision. According to IRS’s estimate, it revised about 450 tax forms and instructions in 2001 that affected individual and business tax returns. In addition to tax law changes, revisions to tax forms and instructions generally reflected procedural changes, legal rulings, and feedback from internal and external stakeholders about the understandability of forms and instructions. As illustrated in figure 1, the annual tax forms development process generally starts with a review of the current tax forms. IRS’s tax law specialists review the existing forms and instructions to determine what changes, if any, may be needed to reflect tax law changes and other requirements. The tax law specialists consider comments from a variety of sources both within and outside IRS.
For example, comments may be obtained from IRS customer service staff in toll-free call centers who answer calls from taxpayers and have firsthand knowledge of particular forms or instructions that were confusing to taxpayers. IRS’s Taxpayer Advocate Service staff may also provide comments useful to the tax law specialists. The Tax Forms Coordinating Committee, comprised of representatives from all of IRS’s key components, the Department of the Treasury, and IRS’s Chief Counsel, reviews draft forms to help ensure that they are not overly burdensome and that they conform to legal and technical requirements. Draft forms are generally posted to IRS’s external website so that external stakeholders and taxpayers may review and comment on them. The Office of Management and Budget (OMB) is responsible for approving each form once every 3 years. The purpose of OMB’s approval is to assess IRS’s compliance with the Paperwork Reduction Act, which, among other things, requires agencies to assess the extent of burden the information they collect imposes on the public. Under the Paperwork Reduction Act, OMB must approve new forms and major revisions to existing ones. After OMB’s approval, the forms and instructions are sent to IRS’s vendors to be printed. Generally, IRS needs to have approved forms ready for printing by early October to ensure that they can be printed and distributed to the public when the tax filing season starts the following January. As shown in table 1, IRS used taxpayers or IRS employees to test the clarity of five individual income tax forms and instructions from July 1997 through June 2002. IRS relied primarily on focus groups to do the testing. Contractors using private citizens did three of the tests (i.e., Earned Income instructions, Child Tax Credit instructions, and Schedule D) and IRS did the other two using IRS employees. 
The two testing methods that IRS used—focus groups and one-on-one interviews—are among the commonly used methods to obtain data from individuals on whether documents such as forms and instructions are clear and understandable. Focus groups generally consist of a small number of participants—about 8 to 12 persons—and are usually selected and organized around the focus group topic. Focus groups are a form of group interviewing that relies on interaction within the group to obtain the impressions of a group of people but not necessarily the impressions of each participant. One-on-one interviews, which aim to obtain individual attitudes, beliefs, and feelings, are used to probe individuals about specific difficulties they may have with completing a form or reading instructions. In some instances, these methods may be used in combination depending upon the particular circumstances of the test. As noted in table 1, IRS used both focus groups and one-on-one interviews in testing the Schedule D form. Testing written documents such as tax forms and instructions helps ensure they are clear, thereby benefiting taxpayers and IRS. Researchers from three federal agencies and a private research firm said testing leads to clearer documents as well as more accurate responses. Our guidance on developing and using questionnaires also recommends testing prior to distribution. IRS’s experience indicates that testing likely improves the clarity of tax forms and instructions and may therefore help reduce the number of errors taxpayers make. Although limited data were available on the costs and benefits of testing IRS forms and instructions, recent changes to Earned Income Credit (EIC) and Child Tax Credit forms illustrate the potential for benefits to significantly exceed the costs of testing.
Researchers from the National Center for Health Statistics (NCHS), the Census Bureau, and the Bureau of Labor Statistics (BLS) said that testing helps ensure that their documents are clear, and therefore users are more likely to understand them and complete them correctly. Consequently, researchers from the agencies said they routinely test written documents, such as forms or surveys, prior to public distribution. A representative of a private research firm also said that testing ensures that documents communicate clearly and that his firm and many others perform such testing for a wide variety of private and public clients. The researchers from the federal agencies told us that the benefits of testing are difficult to quantify and, in some instances, may not be quantifiable. NCHS officials stated that the experience they have gained over time from testing and revising documents has helped them develop clearer forms from the outset, a benefit that may not be quantifiable. A Census researcher also agreed that testing is beneficial but difficult to quantify. Nevertheless, the Census researcher said that testing documents prior to public use helps ensure that they are clear and understandable, which gives the agency a greater chance of receiving accurate responses and which lessens the need for follow-up interviews. Similarly, a BLS researcher noted that, though difficult to quantify, testing benefits the agency by reducing errors made by respondents and those reductions can result in savings of time, money, and effort for BLS. See appendix II for additional information on the use of testing in these three agencies. The representative of the private research firm also provided some perspective on its experiences in conducting tests for public and private sector customers. This firm, like many similar companies, provides a variety of testing and data collection services to public and private sector customers.
The firm arranges and conducts focus groups, one-on-one interviews, and other tests in order to ensure the clarity of forms, instruction manuals, surveys, and Web sites, among other things. Focus groups generally involve 12 participants and cost around $6,700 to $7,600, excluding costs to develop the item to be tested but including incentive pay for participants that could range from $25 to $50 per participant. These costs may be higher, the representative said, if the participants come from special groups. For example, if the participants are medical doctors, incentive pay could be as high as $250 per participant. The researcher also said that private firms vary in how much they spend to test their documents. He estimated that firms generally spend between $300,000 and $500,000 to ensure that a form or document is clear and will meet the firm’s needs. In some cases, firms spend more than $500,000 to do a series of tests, making revisions between each test, before they arrive at a final version of a form or document. He also added that most firms do not spend at levels that would allow them to test all their forms and documents. While maintaining that testing is beneficial, researchers also stated that testing is not fail-safe. It can help identify particular parts of a form that are not clear, researchers said, but it cannot ensure that subsequent changes to the form will entirely resolve the clarity issue. In addition, testing may identify problems participants have in completing written documents, but the participants’ problems may be related to other issues, such as poor math skills, rather than confusing or unclear documents. Testing questionnaires before distribution is also recommended as a quality assurance measure in our guidance on developing and using questionnaires.
According to our guidance, testing questionnaires before they are used is one of the best ways to ensure that the document actually communicates what it was intended to communicate and that users will uniformly interpret it. Testing increases the likelihood that respondents will provide the information needed and helps to alleviate inaccurate responses. Our guidance is also consistent with professional literature on survey design. According to professional literature, “reducing measurement error through better question design is one of the least costly ways to improve survey estimates. For any survey, it is important to attend to careful question design and pretesting.” IRS’s recent limited experience with testing indicates that testing may help ensure the clarity of tax forms and instructions. In 1999, IRS revised the forms and instructions related to the EIC and the Child Tax Credit, tested the revised forms and instructions, and revised them again based on the test results. The following year, when the revised and tested forms and instructions were used by taxpayers, the error rates for the EIC and the Child Tax Credit decreased by 28 and 35 percent, respectively, as shown in table 2. IRS officials told us they attribute part of the decrease in EIC errors to a new approach officials developed for structuring EIC forms and instructions and part to the improvements in the draft documents that resulted from testing the revised forms and instructions. Before the EIC forms and instructions were revised, the instructions included a definition and example describing a qualifying child that taxpayers had to interpret. Incorrectly claimed qualifying children have been a major source of EIC errors. IRS revised the instructions so that taxpayers would answer a series of “yes/no” questions to determine if they have a qualifying child instead of relying on their interpretations of the definition or example of a qualifying child. 
They then tested the old format and the new format. The number of errors decreased substantially when taxpayers used the new format. IRS then made some final changes to clarify the instructions based on test results. In IRS officials’ opinions, the “yes/no” format made it clearer for taxpayers to determine if they had a qualifying child. The benefits of testing some changes to IRS’s forms and instructions can considerably exceed IRS’s costs to do tests, especially because so many taxpayers can be affected by improvements in clarity that may result from testing. IRS’s contract costs, including travel, for testing changes to the EIC and Child Tax Credit forms and instructions were about $56,000 and these costs may have been offset within IRS alone in the year that the change was implemented. More significantly, if testing changes to forms and instructions for these credits led to a 1-minute reduction, on average, in the time taxpayers needed to understand and complete the forms during the 2000 tax filing season, affected taxpayers would have saved 240,000 hours worth $1.2 million valued at minimum wage. Testing has the potential to yield a wide range of benefits to taxpayers and IRS. Table 3 summarizes some of the potential benefits that could result if testing helps clarify tax forms and instructions. If the form or instruction that has been revised and tested remains unchanged, some potential benefits could recur annually for the life of the form or instruction. From the taxpayer’s perspective, benefits from testing could include avoiding the burdens associated with (1) interacting with IRS if they make mistakes due to unclear forms and instructions and (2) understanding and complying with unclear forms and instructions. From IRS’s perspective, benefits are generally in the form of opportunities to use its resources better serving other taxpayers and enforcing the tax laws. 
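The aggregate time-savings arithmetic can be checked directly. In this sketch the claimant count of roughly 14.4 million is an inference from the report's 240,000-hour figure (the count itself is not stated in the text), and the wage used is the then-current federal minimum wage of $5.15 per hour.

```python
# Back-of-the-envelope value of a 1-minute reduction in the time needed
# to understand and complete the credit forms, using the report's figures.
# The claimant count is an inference: 240,000 hours * 60 minutes per hour
# / 1 minute saved per claimant = 14.4 million claimants.
claimants = 14_400_000          # assumed number of affected credit claimants
minutes_saved_each = 1
min_wage = 5.15                 # federal minimum wage at the time, $/hour

hours_saved = claimants * minutes_saved_each / 60
value = hours_saved * min_wage

print(f"{hours_saved:,.0f} hours")  # 240,000 hours
print(f"${value:,.0f}")             # $1,236,000, i.e., "worth $1.2 million"
```

The same two lines of arithmetic underlie the table 6 illustration discussed later; only the inputs change.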
Because IRS makes many changes to forms and instructions affecting individual taxpayers every year, ranging from very simple to more complex changes, the benefits can vary according to the type of changes made. Some forms or instructions may change simply to update certain dollar thresholds based on inflation and these changes may be unlikely to be confusing or unclear to taxpayers. However, in other cases changes may introduce new requirements or concepts to taxpayers, such as when new rules are established through legislation or regulation. Changes intended to address situations like these may be more likely to be confusing or unclear to taxpayers, which could result in a burden on taxpayers to understand their obligations in preparing their tax forms and, possibly, to errors that lead to subsequent interactions with IRS to correct their returns. To the extent that a form or instruction is unclear and the lack of clarity leads to taxpayer errors, the method IRS uses to detect the errors can affect the costs IRS incurs as a consequence. If a taxpayer’s error can be detected by IRS and corrected under its “math error” procedures, which rely extensively on automated processes, the cost to IRS to correct the error is likely to be small. On the other hand, if unclear forms or instructions lead to compliance errors that are detected and addressed through audits conducted through the mail, in IRS offices, or in the taxpayer’s location, the costs to IRS are likely to be higher in part because these processes are more labor intensive. The burden and costs taxpayers might avoid if testing helps clarify forms and instructions, and thereby helps taxpayers avoid errors, can vary substantially just as IRS’s costs can vary. 
In general, because taxpayers need only respond if they disagree with an IRS notice stating that it has corrected an error under its math error procedures, the taxpayer’s burden and cost are likely to be lower than if IRS contacts the taxpayer as part of an audit since audits require taxpayer responses and reviews of taxpayers’ books and records. Illustrations we developed of the potential benefits and costs of testing forms and instructions show that at least in some cases benefits can be substantially greater than the costs to IRS to do tests. The benefits of testing to IRS alone can potentially exceed its testing costs in the first year a change is implemented. But, primarily because a small change in the time required of taxpayers to understand their tax obligations can total to a large aggregate benefit, taking taxpayers’ benefits into account can yield total benefits substantially above IRS’s costs. IRS officials have not attempted to develop quantitative estimates of the benefits to taxpayers and IRS that may result from testing forms and instructions and the costs IRS incurs to achieve those benefits. IRS officials did believe that because taxpayers made fewer errors when using the revised EIC and the Child Tax Credit forms and instructions as shown in table 2, IRS spent less time and money correcting errors related to them. The officials said they could not quantify the cost savings because IRS does not track error correction costs by type of error. To provide some perspective on the potential magnitude of benefits and costs that may be realized due to testing changes to forms and instructions, we analyzed the changes IRS made to EIC and Child Tax Credit forms and instructions. Our analyses are illustrations and not actual assessments of benefits and costs that were associated with testing these forms and instructions because complete data were not available on the potential benefits and costs. 
Further, in constructing our illustrations we sought to be conservative in estimating benefits, in part because we did not have information on the full range of costs IRS incurred to test forms and instructions. Our illustrations focus on (1) a narrow set of benefits to IRS alone due to potential reductions in taxpayer errors, (2) those benefits plus certain benefits to the taxpayers from reduced errors, and (3) potential benefits to taxpayers in reduced time to do their taxes. See appendix I for details on the methodology we used in developing our illustrations. Our first illustration quantifies a narrow set of benefits to IRS alone from testing EIC and Child Tax Credit forms and instructions—that is, the benefits IRS may have realized due to reduced numbers of errors that are handled under its math error procedures. It is likely that to the extent testing contributed to better taxpayer understanding of these two credits, IRS would have obtained other benefits. For instance, because improperly claimed qualifying children is one of the leading causes of the EIC’s high noncompliance rate, if clarified EIC forms and instructions lead fewer taxpayers to improperly claim the EIC, IRS would likely be able to free some of its EIC-related audit resources for other audits or to audit EIC returns that it might otherwise have had insufficient resources to cover. In fiscal year 2002, IRS used about 1,400 full-time equivalent (FTE) staff years for correspondence audits of EIC issues. Our analysis in table 4 shows the amount by which IRS’s potential cost savings from not having to correct EIC and Child Tax Credit errors may have exceeded its testing costs given differing assumptions about how much testing alone may have contributed to reduced taxpayer errors. 
As illustrated, IRS would have saved more in cost avoidance (thereby freeing resources to work elsewhere) in the first year of the change alone than it spent on the contracted testing of the forms and instructions if half of the reduction in errors was due to testing. If only 10 or 25 percent of the reduction was due to testing, then IRS would not have saved more than it spent on the testing contract in the first year. However, some of the benefits of a change in forms or instructions continue to be realized in future years. Again, the illustration does not consider other benefits IRS may have realized. To provide some perspective on how the potential benefits to taxpayers from testing EIC and Child Tax Credit forms could affect the overall benefits and costs of testing, we next looked at potential reduced burden from credit claimants receiving fewer notices due to reduced errors. First, we assumed that on average all taxpayers receiving an error notice from IRS take 2 or 5 minutes to deal with the notice. Based on those assumptions, we calculated the value to taxpayers of the time saved (using minimum wage levels) from not having to deal with IRS error notices. We used the same assumed reductions in errors due to testing that we made for table 4 and we netted taxpayers’ savings with the savings shown in table 4 for IRS alone. As table 5 shows, including testing-related benefits to taxpayers from decreased errors suggests that in the first year following testing of EIC and Child Tax Credit forms and instructions, the net benefit to taxpayers and IRS combined could have been positive except for our lowest assumption about the degree to which testing may have reduced taxpayer errors—our 10 percent assumption. 
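The break-even structure of these cost-avoidance illustrations can be expressed as a small function. The report does not publish IRS's per-error correction cost, so the per-error cost and error-reduction count below are hypothetical placeholders; only the shape of the calculation, avoided correction costs netted against the roughly $56,000 testing contract under different attribution assumptions, reflects the source.

```python
# Sketch of the cost-avoidance logic behind the table 4 illustration:
# how much of the observed error reduction must be attributable to
# testing before IRS's avoided math-error correction costs exceed its
# testing contract cost. Per-error cost and error counts are hypothetical.

TESTING_COST = 56_000  # IRS's contract cost for EIC/Child Tax Credit tests

def net_savings(errors_avoided, cost_per_error, share_due_to_testing):
    """Avoided correction cost minus testing cost, under an assumed
    share of the error reduction attributable to testing."""
    return errors_avoided * share_due_to_testing * cost_per_error - TESTING_COST

# Hypothetical inputs: 300,000 fewer errors at $0.50 each to correct.
for share in (0.10, 0.25, 0.50):
    print(f"share {share:.0%}: net ${net_savings(300_000, 0.50, share):,.0f}")
```

With these placeholder numbers, testing pays for itself in the first year only under the 50 percent attribution assumption, mirroring the pattern the illustration describes.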
Finally, to illustrate the potential benefits if testing EIC and Child Tax Credit forms and instructions made them clearer and thereby reduced taxpayers’ time needed to understand and complete the credit forms, we calculated the value of time saved by taxpayers (using minimum wage levels) in understanding and completing EIC and Child Tax Credit forms assuming the time saved was 1 minute. Unlike for tables 4 and 5, all taxpayers who used the form or instructions to determine whether they qualified for either credit may have saved time if testing contributed to clearer EIC and Child Tax Credit forms and instructions. However, because we did not know how many taxpayers might have used the forms and instructions for this purpose, in calculating the value of time taxpayers may have saved we used only the number of taxpayers who claimed these credits and did not use paid preparers to prepare their tax returns. Table 6 shows that if testing the credits’ forms and instructions helped clarify them and that led taxpayers to take 1 minute less to understand and complete the forms, credit claimants would have saved a total of about 240,000 hours worth $1.2 million at minimum wage levels. To provide another perspective on the potential magnitude of benefits and costs associated with testing changes to forms and instructions, we also looked at IRS’s experience with the rate reduction credit. This one-time credit was enacted in June 2001. When 2001 tax returns were processed during 2002, over 8 million returns had errors related to the credit. IRS did not test the instructions for computing the rate reduction credit that was included on the Form 1040 for tax year 2001. According to IRS officials, they did not test the instructions because the credit was a one-time event, and in their judgment, they had insufficient time to test it. We reported that some of the taxpayers’ errors were probably due to taxpayers not understanding IRS’s instructions on how to compute the credit.
We also reported that the demand for telephone assistance related to the credit was significant during the 2002 filing season, and that some of these calls, based on Taxpayer Advocate Service information, were made because taxpayers did not understand how to compute the credit. Using the same approach to illustrate whether IRS alone may have realized benefits in excess of its testing costs as we did for EIC and Child Tax Credit changes, we developed the illustration shown in table 7. As shown, considering only IRS’s cost and assuming that all errors were corrected by IRS using its math error procedures and assuming IRS would have spent the same amount to test the rate reduction credit instructions as it did for EIC and Child Tax Credit tests, IRS may have been able to save between $233,000 and $666,000. Although this case is somewhat atypical since the rate reduction credit affected essentially all individual taxpayers and the number of errors related to the credit was unusually high, these figures illustrate that the potential for savings to IRS alone from testing instructions at times can substantially exceed its testing costs. However, just as with EIC and Child Tax Credits, taxpayers would have benefited if testing had been done, led to clearer instructions and consequently led to fewer taxpayer errors for the rate reduction credit. Using the same approach we used for EIC and Child Tax Credits, table 8 shows the potential taxpayers’ savings from dealing with fewer rate reduction credit error notices and the net savings to taxpayers and IRS. Table 9 shows that if testing had been done and it improved the clarity of the instructions enough to save taxpayers, on average, 30 seconds in understanding whether and how they needed to complete the credit line on their tax returns, the savings would have been larger than savings to IRS and taxpayers from avoided errors alone. 
Although IRS officials said that making greater use of testing to improve clarity of forms and instructions could be beneficial, officials have not addressed the two constraints—time and resources—that they state limit their ability to do more testing of changes to forms and instructions. Time constraints are not binding for some changes IRS considers to forms and instructions, although IRS cannot realistically test the unknown portion of the changes that are due to laws passed shortly before, or even after, the effective dates for the forms. Also, IRS’s procedures for developing and revising forms do not clearly specify (1) which draft version of forms and instructions should be tested with taxpayers or (2) when in the annual forms development cycle testing should occur. In addition to tight time frames, officials also say that limited resources, such as only one person responsible for coordinating all testing efforts in the Forms and Publications Division, preclude them from increasing tests of forms and instructions. However, IRS has not documented which changes to forms and instructions likely would benefit from testing or demonstrated the benefits that are gained when testing is done. IRS’s planning and budgeting process uses such information in determining the level of resources to be allocated to various units. IRS officials told us that when new tax laws are enacted during the year that require IRS to create or revise tax forms and instructions in time to distribute them to taxpayers by January 1, the start of the tax-filing season, they lack time to test the forms and instructions before distributing them to taxpayers. However, not all changes to forms and instructions are time constrained and IRS’s procedures lack a clear target for which version of forms and instructions should be tested with taxpayers.
While sufficient data were not available to determine the portion of changes IRS makes to forms and instructions that cannot be tested due to time constraints, not all changes are time constrained. Due to the variability in the time that may be required to test a form or instruction and in the amount of time IRS needs to develop the initial form or instruction to be tested, we cannot say definitively when IRS may or may not have sufficient time to conduct tests. In some cases, IRS likely could have sufficient time to do testing when it identifies a needed change to forms or instructions itself since it largely controls the scheduling of this work. Similarly, when the Congress passes a law that is not effective until a future tax year, or that contains provisions that are not effective until a future tax year, IRS may have sufficient time to conduct tests. For example, the Economic Growth and Tax Relief Reconciliation Act of 2001 was passed on June 7, 2001, with some provisions effective for tax year 2001, but others with later effective dates. The provisions modifying education Individual Retirement Accounts were effective for taxable years beginning after December 31, 2001. This gave IRS approximately 16 months to develop and test any modifications to tax forms and instructions and make final revisions before those forms and instructions needed to go to printing for distribution by January 2003. When a law affects the current tax year, i.e., changes how taxpayers will need to calculate their taxes in the next tax-filing season, IRS is less likely to have sufficient time to test. Even in such a case, however, the new law may be passed early enough to allow testing. IRS’s current procedures for developing and revising forms and instructions do not clearly specify which draft version of forms and instructions should be tested with taxpayers or when in the annual forms development cycle testing should occur.
Officials said that draft forms may be tested with taxpayers either before or after they are posted to IRS’s website for external comments by the public, tax practitioners, software developers, and others. Tax Forms and Publications Division officials said that they consider the particular circumstances surrounding the development of each form and instruction when deciding which version of a draft form or instruction they should test. However, because IRS does not have a clear targeted time for testing, IRS’s ability to plan and conduct tests may be constrained. If IRS’s procedures defined a point in the annual forms development cycle where a version of a draft form or instruction would be available for testing, IRS would be able to establish processes and deadlines designed to ensure that the opportunity for testing is realized. To the extent that a draft version of a form or instruction is available for testing early in the process, it would give IRS a fuller range of options for testing. For example, if IRS tested draft versions of forms and instructions before or during the approximately 3-week period that the form is available on its Web site, this would minimize any additional calendar time that testing might otherwise add to IRS’s forms development process. Figure 2 shows the points in IRS’s annual forms development process where testing can occur. As illustrated, testing may be conducted early in the process and late in the process. Testing earlier drafts of forms and instructions would also enable officials to select from various testing alternatives depending on how early a draft is available for testing. We did not find a uniform amount of time needed to test a change to a form or instruction. At the low end of the spectrum, an official from NCHS said that it takes about 7 weeks to test that agency’s questionnaires using one-on-one interviews.
IRS officials estimated that when IRS employees are used as focus group participants it requires about 8 to 12 weeks to schedule and conduct the tests, analyze the data, and prepare a report summarizing the results. IRS officials estimated that when they contract with a private firm to conduct focus groups using private citizens, 24 to 32 weeks are required to obtain a contract, recruit participants, conduct the tests, analyze the results, and prepare a report. This time frame is based on using regular contracting processes involving developing a statement of work, soliciting bids, and selecting a contractor. Contract options exist that enable agencies to identify a firm or group of firms qualified to undertake work so that an expedited task order procedure can be used to select a firm when needs arise. According to IRS officials, they recently entered into a multiyear contract with two vendors that will enable them to issue task orders when work is needed. Although IRS’s Tax Forms and Publications Division officials believe current resources are insufficient to support more testing of forms and instructions, they do not have some of the information needed to determine whether to allocate additional resources. This information is not available at least in part because division guidelines and policies do not require that it be gathered. Officials said that because they have so few staff available to conduct tests and have a limited budget to contract for testing, they could not increase the number of tests they perform. According to the officials, currently only 1 of 103 persons in the division is trained in testing methods. In addition to other duties, this person coordinates the tests for the division, such as the test of EIC forms and instructions completed in 1999 by a private vendor and the test of the innocent spouse application form completed by IRS in 2002.
Officials also told us some staff who are primarily responsible for creating and revising tax documents may occasionally assist in conducting tests, such as the three persons involved in testing the innocent spouse form. Officials also said the total budget for contract support for the division was $150,000 in fiscal year 2002, $185,000 in fiscal year 2001, and $130,000 in fiscal year 2000. As part of its annual planning and budgeting process, IRS management determines what resources will be needed to accomplish strategies and implement programs. IRS’s planning and budget guidance requires that each operating unit prepare a business plan that, among other things, clearly defines priorities and resource requirements. Requests in the business plan for resources must be substantiated with evidence that allocating additional resources is justified. However, the division does not systematically identify when testing would be beneficial and does not routinely demonstrate the benefits to taxpayers and IRS that have been gained from such testing. Officials do not identify which of the many changes the division makes to forms and instructions each year would most likely benefit from testing. Thus, the officials cannot tell IRS management how many opportunities to improve forms and instructions may be lost due to current resource levels. Further, when tests are performed, officials do not identify, quantitatively or qualitatively, the benefits that taxpayers and IRS may have realized. One reason that IRS does not have data on forgone testing opportunities is that the division lacks formal, written guidelines and procedures for determining when testing would be beneficial. Currently, testing is an optional step in the process for developing forms. IRS’s Tax Forms and Publications officials said that they decide which forms to test based on informal guidelines and procedures and input from officials in IRS’s four operating division program offices and the Taxpayer Advocate Service.
The informal guidelines and procedures call for officials to weigh, among other things, whether a form or instruction (a) affects a large number of taxpayers, (b) has a high error rate based on taxpayers’ prior use of the form, (c) is perceived as complex, and (d) will be used for several filing seasons. Also, according to IRS, the amount of time available to perform tests is factored into testing decisions. These informal guidelines do not require officials to consider in all cases whether testing would be beneficial and to document the decisions made. Accordingly, even if the informal guidelines are applied, and officials judge that some forms or instructions could benefit from testing but cannot be tested due to scarce resources, those decisions are not made systematically and documented. Further, although the factors the guidelines suggest taking into account appear to have evolved from officials’ experience and therefore should be useful, they do not consider some pertinent factors that could affect the benefits likely to be realized from testing. For instance, the guidelines suggest taking the number of affected taxpayers into account but not the likely amount of burden they would face due to unclear forms or instructions. They also do not clearly call for officials to consider the costs to test forms and instructions and the benefits that may accrue throughout IRS, such as in telephone service centers. In addition, these informal guidelines and procedures automatically exclude testing forms and instructions that will be used only one time. Also, according to IRS officials, the time frame between the passage of new tax laws and when the newly created or revised forms and instructions must be finalized may preclude some forms and instructions from being tested. Even if one-time-use forms meet other testing criteria, such as affecting a large number of taxpayers who may perceive them as complex, IRS will not consider testing them.
As the rate reduction credit situation discussed earlier illustrates, such automatic exclusions may not be appropriate in all situations. IRS officials do not have information on the results achieved when forms and instructions are tested, in part because the division does not have policies that require such evaluations. When IRS obtained information on the reduction in error rates following testing of EIC and Child Tax Credit forms and instructions, the studies did not include collecting other information on the benefits that may have resulted for taxpayers and for IRS. For instance, the studies did not estimate the savings IRS may have realized in its telephone and walk-in service due to increased form clarity. Capturing fuller information on the results of testing would be consistent with IRS’s strategic planning and budgeting process, which emphasizes assessing the impact of current programs to efficiently allocate resources. Further, by evaluating the results of testing decisions, IRS officials would be able to determine if their testing guidelines and procedures lead to good decisions about when testing is most likely to be beneficial. They may also be able to see if the methods they use to test—for example, focus groups formed by IRS employees or one-on-one interviews with individuals—yield the most effective test results. IRS continually faces the daunting task of developing and revising tax forms and instructions to administer our ever-changing set of federal tax laws. Taxpayers rely on IRS for forms and instructions that are as clear and easy to understand as possible given the complexity of the tax laws, and providing clear materials is a key goal of IRS’s Tax Forms and Publications Division. In attempting to meet this goal, IRS has tested an average of one set of forms and instructions each year over the last 5 years.
In contrast, officials from three federal agencies that routinely collect information from the public say that testing documents for clarity before using them is their standard practice. They do so because they believe testing will ensure that their data collection documents are clear and that individuals will understand them and complete them accurately. Although it is difficult to gauge how much testing alone contributes to the clarity of tax forms and instructions, IRS officials believe testing has contributed to significant declines in taxpayer errors. Illustrations we developed based on IRS’s experience in testing forms and instructions suggest that IRS can completely recover its testing costs in the first year following testing in some circumstances and that when savings to taxpayers from more understandable forms and instructions are considered, total benefits even in the first year following tests can be several times IRS’s testing costs. Although they recognize that testing is beneficial, officials say time constraints and limited resources preclude more testing. However, IRS’s procedures do not clearly specify when draft versions of forms and instructions should be available for testing. Having a clearly defined point where testing would be performed would facilitate establishing procedures and deadlines to better ensure that testing could be done even within IRS’s annual forms update cycle. Further, IRS officials do not have information that would help IRS management to determine whether to allocate additional resources to support enhanced testing. Because IRS lacks standard written procedures for testing, officials have not documented cases where testing would likely be beneficial and have not demonstrated the benefits that are gained from testing. 
Because testing could potentially yield clearer and more understandable tax forms and instructions, thereby producing benefits both to taxpayers and IRS, we recommend that the Acting Commissioner of Internal Revenue take the following actions:
- Develop written criteria for determining which changes to tax forms and instructions should be tested with taxpayers before publication.
- Develop official written guidance that incorporates those criteria and ensure that the guidance requires staff who develop new or revised forms and instructions to document which changes would merit testing and why.
- Clarify procedures by designating when in the annual forms development process a draft version of forms and instructions should be available for testing with taxpayers.
- Ensure that an appropriate range of evaluations is conducted of the tests that are performed, to better establish the costs and benefits of testing and to refine IRS’s approach to testing on the basis of lessons learned.
- Use information gained from documenting when testing changes to forms or instructions likely would be beneficial, and from evaluations of tests, to reassess an appropriate level of resources for testing.
The Acting Commissioner of Internal Revenue provided written comments on a draft of this report in an April 7, 2003, letter, which is reprinted in appendix III. The Acting Commissioner agreed with our recommendations. We are encouraged that IRS plans to implement all but one of our recommendations in time for the 2004 forms development cycle. Understandably, the remaining recommendation, to ensure that an appropriate range of evaluations is conducted of tests, would take more time to put into practice. The Acting Commissioner also provided additional comments and observations on our draft report. The Acting Commissioner commented that the crux of our report is that we do not believe IRS has performed adequate testing on new and revised tax forms and instructions due to a lack of resources.
He said that resources for testing forms and instructions have been adequate for the testing IRS wanted to perform. While not questioning whether resources were adequate for the testing IRS performed, we concluded that IRS officials do not have the information needed to determine the level of resources that should be allocated to testing forms and instructions. Accordingly, we recommended that IRS systematically identify opportunities to improve forms and instructions through testing and evaluate the costs and benefits when testing is done. Although agreeing that testing is beneficial, the Acting Commissioner also said that there are significant staff costs associated with testing that are not included in our cost analysis. We recognize that our analysis excluded staff costs and, as stated in our draft report, we sought to be conservative in estimating the benefits of testing, in part because we did not have information on the full range of costs IRS incurs when undertaking projects to test forms and instructions. During the course of our work, we requested estimates of staff costs for testing, but IRS was unable to provide them. Nevertheless, at least in the cases we illustrated, the potential benefits of testing were so much greater than the costs that including staff costs likely would not have substantially changed the results of our illustrations. The Acting Commissioner expressed concern about whether IRS could have forms and instructions ready for the filing season if testing were done late in the forms development cycle as shown in our figure 2 depicting IRS’s process. We agree with the Acting Commissioner’s concern; however, our figure shows the various points at which testing can occur in IRS’s current processes based on interviews with IRS officials and the documentation they provided us.
As IRS implements our recommendation to clarify when testing should be done, selecting a point as early as possible would help maximize the number of changes that can be tested during the annual forms update cycle. The Acting Commissioner also said that he disagreed with our conclusion that IRS’s experience with obtaining feedback on its products is limited or recent. He said IRS uses various methods to obtain customer feedback. We agree that IRS uses methods other than testing to obtain feedback on its forms and instructions. However, our report describes the potential benefits and costs of testing as a feedback method. In terms of testing, IRS has only tested five forms and instructions during July 1997 through June 2002; in our view, this is a limited number of tests that were conducted during the recent past. The Acting Commissioner also disagreed that testing would result in reduced demand for walk-in and toll-free assistance. He said that IRS lacks data to support such a conclusion and, based on its experience, new forms generate requests for assistance and error rates on them tend to be higher. We recognize that there will always be a demand for taxpayer customer assistance. However, we believe that reduced demand for assistance is a potential benefit of testing. We note, for example, that IRS officials seek input from telephone assistors when deciding which forms or instructions need to be clarified, apparently believing that clarifying the forms and instructions may help reduce calls to assistors. Finally, testing is one means for ensuring that even for new forms and instructions requests for assistance and errors made by taxpayers will be minimized. 
We are sending copies of this report to the Chairman and Ranking Minority Member of the House Committee on Ways and Means and its Subcommittee on Oversight; the Secretary of the Treasury; the Acting Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. This report was prepared under the direction of Charlie Daniel, Assistant Director. If you have any questions regarding this report, please contact him or me at (202) 512-9110. Key contributors to this report were David Alexander, Christopher Currie, Ronald La Due Lake, Anne Laffoon, Veronica Mayhand, Edward Nannenhorn, and Shellee Soliday. To determine how often IRS has used taxpayers to test the clarity of new or revised individual income tax forms and instructions, we interviewed officials in its W&I Division’s Tax Forms and Publications Division in Washington, D.C. We sought to (a) obtain an understanding of the process IRS used to develop and revise individual income tax forms and instructions and (b) gather information on the forms and instructions IRS tested using taxpayers for the 5-year period between July 1997 and June 2002. We requested information for a 5-year span to help ensure that the information collected would reflect the amount of testing usually done by IRS. Our work did not include assessing IRS’s processes for developing and revising notices or publications or assessing the clarity of any specific tax forms or instructions. To obtain insights on the benefits of testing written documents, we interviewed officials from three federal agencies in the metropolitan Washington, D.C., area that perform extensive research and data collection using private citizens.
We contacted NCHS, BLS, and the Census Bureau because they have broad experience in conducting tests of the clarity of written documents, such as forms, surveys, and instructions. We also contacted a private research firm that specializes in testing forms and surveys for a variety of public and private sector clients. We also reviewed our own guidance for developing and using questionnaires. To determine the benefits to taxpayers and IRS of testing tax forms and instructions with taxpayers prior to their use by the public, we took several steps. First, we interviewed IRS officials including Tax Forms and Publications Division officials to obtain information on whether they perceived testing as being beneficial to taxpayers and IRS. We also interviewed IRS’s W&I Research Division officials in Indianapolis to obtain their views on whether testing tax forms and instructions benefits taxpayers and IRS. The research division officials, among other things, collect information on taxpayer errors that the Tax Forms and Publications Division uses when deciding which forms and instructions to test. These officials provided us with data on changes in error rates for EIC and Child Tax Credit forms and instructions before and after IRS revised and tested them. Second, we developed illustrations of the potential benefits and costs of IRS’s testing EIC and Child Tax Credit forms and instructions by analyzing the data on the changes in error rates obtained from IRS research officials. To construct our illustrations, we used readily available data and made certain assumptions. Data were not available on many of the potential benefits of the changes and on the full range of costs IRS incurred to conduct the tests. We developed similar illustrations of potential benefits to taxpayers and IRS if the rate reduction credit instructions had been tested. 
In all cases, the illustrations we developed are not actual assessments of the costs and benefits that were associated with testing forms and instructions, or that would have resulted if testing had occurred. In developing the illustrations we sought to be conservative in estimating benefits, in part because we did not have information on the full range of costs IRS incurred to test forms and instructions. Because we had to make various assumptions, our illustrations undoubtedly vary from actual costs and benefits. To determine the potential cost savings to IRS of testing these forms and instructions, we first estimated IRS’s costs to correct a taxpayer error. Our estimates of IRS’s cost to correct errors were limited to the labor cost associated with correcting errors in IRS’s math error program. IRS provided us the number of full-time equivalent (FTE) staff years for operating the Error Resolution System (ERS) that was used to detect and correct math errors and the number of errors corrected by ERS during fiscal year 2002. We used this information to calculate an average labor cost to correct math errors that we then applied to reductions in EIC and Child Tax Credit errors as well as to error reductions that might have resulted from testing the rate reduction credit. Total error correction costs may be higher because we did not include a number of other costs associated with correcting errors. For example, we excluded printing and postage costs for the notices sent to taxpayers. IRS notices often cover more than one issue associated with a tax return, and data were not readily available to determine what portion of the postage cost might be attributable to EIC or Child Tax Credit issues alone. In addition, the estimate does not include costs such as equipment and rent. We did not specifically test the accuracy of the cost information provided; however, our audits of IRS’s annual financial statements have raised concerns regarding IRS’s ability to identify all costs associated with a given program or activity.
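The unit-cost step described above is simple division: total ERS labor cost spread over the number of errors corrected. A minimal sketch, using hypothetical figures because the report does not publish IRS’s actual FTE count, labor rate, or error volume:

```python
def avg_cost_per_error(fte_years: float, cost_per_fte: float,
                       errors_corrected: int) -> float:
    """Average labor cost to correct one math error:
    total ERS labor cost divided by the number of errors corrected."""
    return fte_years * cost_per_fte / errors_corrected

# Hypothetical inputs for illustration only -- none of these
# figures come from the report.
print(f"${avg_cost_per_error(100, 40_000, 1_000_000):.2f} per error corrected")
# -> $4.00 per error corrected
```

Any such per-error figure understates the true cost, since, as noted above, printing, postage, equipment, and rent are excluded.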
Because data were not available on the extent that testing reduced errors related to EIC and Child Tax Credit forms and instructions, we assumed different percentage reduction rates in errors due to testing these forms and instructions. Our illustrations show potential savings based on assumed percentage reductions in errors attributable to testing of 50, 25, and 10 percent. Using our estimate of IRS’s cost to correct an error and the assumed number of errors eliminated by testing, we arrived at the costs that would have been incurred to correct those errors. The difference between the costs to correct eliminated errors and the costs to test is the potential cost savings from testing. Cost savings due to reduced errors likely would not mean reductions to IRS’s budget. Rather, savings likely would mean that IRS would provide services to other taxpayers or would pursue other compliance or tax collection activities that it would otherwise have been unable to do. To determine IRS’s costs to test forms and instructions, we used IRS’s actual contract costs for testing EIC and Child Tax Credit forms and instructions in 1999. IRS spent a total of about $56,000 to test both EIC and Child Tax Credit forms and instructions with focus groups. Although the cost of contracted support for testing each form and instruction individually likely would have been somewhat lower than this, we applied the total cost in each case. Because IRS could not provide data, we did not include the costs associated with IRS letting and managing the contract, the cost for Tax Forms and Publications Division staff to work with the contractor in conducting and managing the tests, or the cost of any other IRS staff involved in these tests.
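The savings arithmetic above can be sketched in a few lines. Only the $56,000 testing cost below is an actual figure from the report (the 1999 contract cost for both tests); the baseline error count and per-error correction cost are assumptions made for illustration:

```python
def net_savings(baseline_errors: int, cost_per_error: float,
                testing_cost: float, reduction: float) -> float:
    """Net first-year savings: correction costs avoided by the assumed
    error reduction, minus the cost of conducting the test."""
    return baseline_errors * reduction * cost_per_error - testing_cost

BASELINE_ERRORS = 500_000  # assumed annual errors on the forms before testing
COST_PER_ERROR = 5.00      # assumed labor cost to correct one math error
TESTING_COST = 56_000      # actual 1999 contract cost cited in the report

for reduction in (0.50, 0.25, 0.10):  # the report's three assumed reduction rates
    saved = net_savings(BASELINE_ERRORS, COST_PER_ERROR, TESTING_COST, reduction)
    print(f"{reduction:.0%} error reduction: net savings ${saved:,.0f}")
```

Under these assumptions even the most conservative 10 percent reduction recovers the testing cost several times over in the first year, which mirrors the report’s conclusion.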
To determine the potential benefits to taxpayers from testing EIC and Child Tax Credit forms and instructions, we calculated estimated values to taxpayers of the time saved if testing improved form and instruction clarity, thereby reducing taxpayer errors and the burden of dealing with IRS error notices. We also estimated taxpayers’ time saved if testing reduced the time needed to understand and complete tax forms and instructions. We developed similar illustrations for potential benefits to taxpayers if the rate reduction credit had been tested. Our benefit illustrations were based on a series of assumptions. For example, to estimate the value to taxpayers of time saved from not having to deal with an IRS notice, we applied the minimum wage rate that was in effect after IRS revised and tested the forms and instructions to estimates of time taxpayers might save by not having to deal with an IRS notice. For these illustrations we assumed, on average, that taxpayers who received an IRS error notice might spend either 2 or 5 minutes to deal with it. Data were unavailable on how much time taxpayers actually spend dealing with IRS’s notices; however, according to Taxpayer Advocate Service information, IRS’s notices are difficult for taxpayers to understand. Further, taxpayers who decide to contest an IRS notice may take time to call or write letters to IRS or to contact and work with a tax preparer. Data were not readily available to determine what portion of taxpayers who received an EIC or Child Tax Credit error notice contested IRS’s change to their tax returns. For our illustrations of the estimated value of time saved by taxpayers if testing reduced the time needed to understand and complete EIC and Child Tax Credit forms and instructions, we assumed that taxpayers would save on average 1 minute by using clearer forms and instructions. 
We excluded from our illustrations those tax returns prepared by preparers because the taxpayers might not have had to read and understand the forms and instructions. Because data were not available on the number of tax returns claiming the Child Tax Credit that were prepared by paid preparers, we reduced the number of returns claiming the credit by the same percentage of EIC returns that were prepared by preparers. The percentage of EIC claimants using paid preparers exceeds the average for all taxpayers. For the rate reduction credit, our illustration is based on a 30-second time savings and the number of taxpayers who filed on paper and did not use preparers. We chose a 30-second potential savings for this illustration because many taxpayers would have had to read only part of the instructions to determine what to do. We aggregated the times for all taxpayers and multiplied the total hours saved by the prevailing minimum wage rate to arrive at estimated benefits to taxpayers. To determine whether any factors limited IRS’s ability to use individual taxpayers to test forms and instructions and, if so, how these factors can be addressed, we interviewed IRS’s Tax Forms and Publications Division’s officials and analyzed supporting data they provided us. Regarding officials’ view that they lacked sufficient time to do more testing, we reviewed information on the amount of time IRS, NCHS, and the private research firm we contacted took to perform various types of tests. We also reviewed IRS’s process for developing new and revised forms and instructions and determined how many weeks were available between the dates that various laws were enacted or their provisions became effective and IRS’s normal October 1st deadline for printing. Finally, regarding officials’ view that they lacked sufficient resources to do more testing, we obtained information on the resources available within IRS for testing. 
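The time-valuation arithmetic described above aggregates minutes saved across self-preparing taxpayers and prices the total hours at the minimum wage. A sketch follows; the $5.15 federal minimum wage in effect after the 1999 revisions is factual, but the count of self-prepared returns is an assumed round number, not a figure the report states:

```python
def time_savings_value(returns: int, minutes_saved: float,
                       hourly_wage: float) -> float:
    """Value of taxpayer time saved: aggregate hours times the wage rate."""
    return returns * minutes_saved / 60 * hourly_wage

MIN_WAGE = 5.15             # federal minimum wage in effect after the 1999 revisions
SELF_PREPARED = 14_000_000  # assumed self-prepared returns claiming the credits

# 1 minute saved per return (the EIC/Child Tax Credit assumption) and
# 30 seconds (the rate reduction credit assumption)
print(f"1 minute saved per return:  ${time_savings_value(SELF_PREPARED, 1.0, MIN_WAGE):,.0f}")
print(f"30 seconds saved per return: ${time_savings_value(SELF_PREPARED, 0.5, MIN_WAGE):,.0f}")
```

With these assumed inputs the 1-minute case lands near the $1.2 million benefit cited in the report’s summary, many times the $56,000 contract cost of the tests.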
We performed our work from May 2002 through March 2003 in accordance with generally accepted government auditing standards. Like IRS, the three federal agencies we contacted create written documents to be completed by the public. The agencies create documents such as forms, surveys, and questionnaires that they use to collect information from the public to fulfill their missions. Unlike IRS, as described below, these agencies routinely test their forms, surveys, and questionnaires prior to distribution to the public. The National Center for Health Statistics, the nation’s principal health statistics agency, compiles statistical information to guide actions and policies relevant to public health and health policy. According to NCHS, obtaining accurate and usable health information is crucial to successfully fulfilling its mission to provide reliable information to the Centers for Disease Control and Prevention and the Department of Health and Human Services. NCHS collects information through various sources, including questionnaires that it develops and administers. Researchers at NCHS told us that in support of their research they administer surveys and questionnaires each year in addition to developing questionnaires used by the Centers for Disease Control and Prevention. They test each questionnaire using one-on-one interviews. When documents pertain to a particular rather than a general population, the researchers recruit participants with characteristics similar to those persons who might be completing the forms or questionnaires. For example, researchers recruited asthmatics to test a questionnaire related to asthma. NCHS tests forms in one-on-one settings in which a participant may be asked to work through a form while a moderator observes and then later interviews the participant. This approach allows the researcher to identify specific points at which the forms were confusing or problematic and learn why the participant had difficulty.
NCHS prefers to use one-on-one interviews when conducting tests because this method closely resembles the ways in which individuals will be completing the documents, since individuals will likely complete the documents by themselves. The Census Bureau is the principal agency responsible for collecting and providing data about the people and the economy of the United States. An accurate census is important because census results are used to reapportion seats in the House of Representatives, redraw congressional districts and other political boundaries, and address countless other public and private data needs. The Census Bureau collects information through short-form and long-form questionnaires that it develops, tests, and administers. In preparation for the 2000 Census, the Congress budgeted millions of dollars to develop and test questionnaires during the 1990s. The Census Bureau’s policy requires that demographic survey questionnaires be tested. It has used focus groups and one-on-one interviews to test its questionnaires and forms. For example, in fiscal year 1996, the Census Bureau decided to make fundamental changes to the traditional census design, such as shortening census questionnaires. In that year, it budgeted funds to test, among other things, respondents’ understanding of race and ethnicity questions. The Census Bureau has also conducted detailed cost-benefit analyses of alternative designs; in 1992 it tested the simplified questionnaire in order to gauge whether the new form would increase response rates and reduce costly follow-up with households that did not respond to the census. The Bureau of Labor Statistics, the principal fact-finding agency for the federal government in the broad field of labor economics and statistics, also depends on clear and understandable written documents to collect accurate information from the public.
According to BLS’s policy, testing documents such as forms and surveys prior to use by the general public should be undertaken to help identify factors that may impede users’ ability to understand forms or surveys. Then these factors can be addressed in order to improve the clarity of written documents and increase the accuracy of responses. Testing should be done in the early stages of document development so that any problems with clarity can be identified early. BLS routinely tests its written documents using focus groups and one-on-one interviews, and uses the results of the tests to make improvements to the documents.

Taxpayers rated the Internal Revenue Service's (IRS) ability to provide clear and easy-to-use forms and instructions among the lowest of 27 indicators of service in 1993. Due to continuing concerns about unclear forms and instructions, GAO was asked to determine (1) whether and how often IRS tests the clarity of new and revised individual income tax forms and instructions; (2) the benefits, if any, of testing forms and instructions for clarity prior to their use; and (3) whether any factors limit IRS's ability to do more tests and if so, how they can be addressed. IRS used taxpayers and its employees to test revisions to five individual income tax forms and instructions from July 1997 through June 2002. According to IRS officials, they revised about 450 tax forms and instructions in 2001, many of which were for individual income tax returns. Testing forms and instructions can help ensure their clarity and thereby benefit taxpayers and IRS by, for instance, reducing taxpayers' time to understand and complete tax forms, reducing calls to IRS for assistance, and reducing taxpayer errors. Due to similar benefits, federal agencies we contacted that routinely collect information from the public test their questionnaires.
Quantifying benefits due to testing is difficult, but IRS's experience in revising and testing Earned Income Credit and Child Tax Credit forms and instructions suggests that benefits of testing in some cases can considerably exceed the cost of testing. If taxpayers who did their own tax returns needed 1 less minute to understand these two credits due to testing, their time saved, valued at the minimum wage, would be worth $1.2 million; IRS's contracting cost for the two tests was $56,000. Although IRS officials recognized that testing could be beneficial, they cited tight time frames and constrained resources as limiting their ability to do more tests. While IRS faces time constraints when making some changes to forms and instructions due to the passage of new laws, not all changes are time constrained. IRS does not have procedures specifying which versions of draft forms and instructions should be tested with taxpayers or when in its annual forms development process testing should occur. Resources currently available for testing are limited but the office responsible for testing has not developed data on missed testing opportunities and has limited data on the benefits that have been realized when testing occurred. IRS's planning and budgeting process uses such data to support resource allocation decisions. |
Basic training is the initial training provided to military recruits upon entering service into one of the military services. While the program and length of instruction vary somewhat among the services, the intent of the training is to transform male and female recruits from civilians into military service members. Basic training typically consists of physical conditioning; learning the military service’s core values, history, and tradition; weapons qualification; instilling discipline; and nuclear, biological, and chemical protection training, along with other training needed for initial entry into the services. The training varies in length—typically 6.4 weeks in the Air Force, 9 weeks in the Army and Navy, and 12 weeks in the Marine Corps. Following completion of basic training, recruits attend advanced individual training to further enhance skills in particular areas of interest (military occupational specialties). Upon arriving at a basic training location, recruits are processed and are generally housed for several days in reception barracks pending their assignment to a training unit and their primary barracks for the duration of the basic training period. For the most part, the housing accommodations within existing barracks are typically the same, regardless of male or female occupancy. DOD standards dictate space requirements of 72 square feet of living space per recruit, but the actual space provided is often less than that for the services, particularly during the summer months when a surge of incoming recruits usually occurs. In the Navy and Air Force, male and female recruits are housed on different floors in the buildings. In the Army, Fort Jackson and Fort Leonard Wood are the only locations where both male and female recruits undergo basic training, and they are housed separately in the same buildings, sometimes on the same floor.
In the Marine Corps, all female recruits receive basic training at Parris Island, and they are housed in separate barracks. While the barracks across the services differ in design, capacity, and age, it is common for the barracks to have 2 or 3 floors with central bathing areas and several “open bays” housing from 50 to 88 recruits each in bunk beds. Some of the barracks, such as the Army’s “starships” and the Air Force barracks, are large facilities that house over 1,000 recruits. Others, especially those constructed in the 1950s and early 1960s, are smaller with recruit capacities of about 240 or less. Table 1 provides an overall summary of the number and age of the military services’ recruit barracks, along with the number of recruits trained in fiscal year 2001. As shown in the table, the Army has the largest number of barracks—over 60 percent of the total across the services—and trains nearly one-half of the recruits entering the military. The Army also uses temporary barracks, referred to as “relocatables,” to accommodate recruits at locations where capacity is an issue. Figure 1 depicts an exterior view of recruit barracks at Lackland Air Force Base, Texas, an “open bay” living space at the Marine Corps Recruit Depot at Parris Island, South Carolina, and an Army temporary (relocatable) barracks at Fort Sill, Oklahoma. Until recently, DOD had no readiness reporting system in place for its defense installations and facilities. In fiscal year 2000, DOD reported to the Congress for the first time on installation readiness as an integral element of its overall Defense Readiness Reporting System. At the core of the system is a rating classification, typically referred to as a “C” rating. The C-rating process is intended to provide an overall assessment that considers condition and capacity for each of nine facility classes (e.g., “operations and training,” and “community and housing”) on a military installation.
Recruit training barracks fall within the community-and-housing facility class. The definitions for the C-ratings are as follows: C-1—only minor facility deficiencies with negligible impact on capability to perform missions; C-2—some deficiencies with limited impact on capability to perform missions; C-3—significant facility deficiencies that prevent performing some missions; and C-4—major facility deficiencies that preclude satisfactory mission accomplishment. Each service has the latitude to develop its own processes in establishing C-ratings for its facilities. The services’ systems for assessing the condition of facilities are: the Army’s Installation Status Report; the Air Force’s Installations’ Readiness Report; the Navy’s Installation Readiness Reporting System; and the Marine Corps’ Commanding Officer’s Readiness Reporting System. These systems generally provide aggregate assessments of the physical condition of facilities based on periodic facility inspections. The Department subsequently aggregates the services’ reports and submits an overall assessment for each facility class to the Congress in the Department’s Quarterly Readiness Report. The majority of the services’ basic training installations had given their recruit barracks a C-3 rating, indicating they have significant deficiencies. Despite the acceptable outward appearance and generally good condition of most barracks’ exteriors, our visits to the training locations confirmed that most barracks had significant (C-3) or major (C-4) deficiencies requiring repair or facility replacement. Our site visits confirmed the existence of significant deficiencies, but we also noted some apparent inconsistencies in service ratings of their facilities’ condition. Conditions varied by location. Among barracks in poor condition, we observed a number of typical heating and air conditioning, ventilation, and plumbing-related deficiencies that formed the basis of the services’ ratings for their barracks.
Base officials told us that, although these deficiencies had an adverse impact on the quality of life for recruits and were a burden on trainers, they were able to accomplish their overall training mission. At the same time, we noted recent improvements had been made to some recruit barracks at various locations. We observed that, overall, the services’ recruit training barracks had significant or major deficiencies, but that conditions of individual barracks vary by location. In general, we observed that the Army’s, Navy’s, and Marine Corps’ Parris Island barracks were in the worst physical condition. Table 2 shows the services’ overall rating assessments for the recruit barracks by specific location and the typical deficiencies in those barracks that form the basis of the ratings. With the exception of Parris Island, all locations reported either C-3 or C-4 ratings for their barracks. These ratings are relatively consistent with the ratings of other facilities within the DOD inventory. Recent defense data show that nearly 70 percent of all DOD facilities are rated C-3 or C-4. Further, as shown in appendix 2, the C-ratings for recruit training barracks are not materially different from the ratings of other facilities at the training locations we visited. The C-ratings depicted in table 2 show the overall condition of the recruit barracks at a specific location, but the condition of any one building within a service and at a specific location could differ from the overall rating. The Army, with the greatest number of barracks, had the most problems. For the most part, the Army’s barracks were in overall poor condition across its training locations, but some, such as a recently renovated barracks at Fort Jackson and a newly constructed reception barracks at Fort Leonard Wood, were in better condition. 
Similarly, the Navy barracks, with the exception of a newly constructed reception barracks in 2001, were in degraded condition because the Navy, having decided to replace all of its barracks, had limited its maintenance expenditures on these facilities in recent years. Of the Marine Corps locations, Parris Island had many barracks in poor condition, the exception being a recently constructed female barracks. The barracks at San Diego and Camp Pendleton were generally in much better shape. The Air Force’s barracks, particularly five of eight barracks that had recently been renovated, were in generally better condition than the barracks at most locations we visited. Our visits to the basic training locations confirmed that most of the barracks had significant or major deficiencies, but we found some apparent inconsistencies in the application of C-ratings to describe the condition of the barracks. For example, as a group, the barracks at the Marine Corps Recruit Depot, Parris Island, were the highest rated—C-2—among all the services’ training barracks. The various conditions we observed, however, suggested that they were among the barracks with the worst physical condition we had seen. Marine Corps officials acknowledged that, although they had completed a recent inspection of the barracks and had identified significant deficiencies, the updated data had not yet been entered into the ratings database. As a result, the rating was based on outdated data. On the other hand, the barracks at the Marine Corps Recruit Depot, San Diego, were rated C-3, primarily due to noise from the San Diego airport that is next to the depot. Otherwise, our observations indicated that these barracks appeared to be in much better physical condition than those at Parris Island because the San Diego barracks were being renovated.
After we completed our work, the Marine Corps revised its Parris Island and San Diego barracks’ ratings to C-4 and C-2, respectively, in its fiscal year 2002 report. The Air Force barracks were rated C-3, but we observed them to be among those barracks in better physical condition and in significantly better condition than the Army barracks that were rated C-3. And the Navy’s C-4 rating for its barracks was borne out by our visits. Similar to the Marine Corps Parris Island and the Army barracks, we found in general that the Navy barracks were in the worst physical condition. In our discussions with service officials, we learned that the services use different methodologies to arrive at their C-ratings. For example, all the services except the Army use engineers to periodically inspect facility condition and identify needed repair projects. The Army uses building occupants to perform its inspections using a standard inspection form. Further, all the services except the Army consider the magnitude of needed repair costs for the barracks at the training locations in determining the facilities’ C-ratings. While these methodological differences may produce inconsistencies in C-ratings across the services, we did not specifically review the impact the differences may have on the ratings in this assignment. Instead, we are continuing to examine consistency issues regarding service-wide facility-condition ratings as part of our broader ongoing work on the physical condition and maintenance of all DOD facilities. Our visits to all 10 locations where the military services conduct basic training confirm that most barracks have many of the same types of deficiencies that are shown in table 2. The most prevalent problems included a lack of or inadequate heating and air conditioning, inadequate ventilation (particularly in bathing areas), and plumbing-related deficiencies. Inadequate heating or air conditioning in recruit barracks was a common problem at most locations.
The Navy’s barracks at Great Lakes, for example, had no air conditioning, and base officials told us that it becomes very uncomfortable at times, especially in the summer months when the barracks are filled with recruits who have just returned from training exercises. During our visit, the temperature inside several of the barracks we toured ran above 90 degrees with little or no air circulation. Base officials also told us that the excessive heat created an uncomfortable sleeping situation for the recruits. At the Marine Corps Recruit Depot at Parris Island, several barracks that had been previously retrofitted to include air conditioning had continual cooling problems because of improperly sized equipment and ductwork. Further, we were told by base officials that a high incidence of respiratory problems affected recruits housed in these barracks (as well as in some barracks at other locations), and the officials suspected mold spores and other contaminants arising from the filtration system and ductwork as a primary cause. At the time of our visit, the Marine Corps was investigating the health implications arising from the air-conditioning system. And, during our tour of a barracks at Fort Sill, Army personnel told us that the air conditioning had been inoperable in one wing of the building for about 2 years. Inadequate ventilation in recruit barracks, especially in central bathing areas that were often subject to overcrowding and heavy use, was another common problem across the services. Many of the central baths in the barracks either had no exhaust fans or had undersized units that were inadequate to expel moisture arising from shower use. As a result, mildew formation and damage to the bath ceilings, as shown in figure 2, were common. In barracks that had undergone renovation, however, additional ventilation had been installed to alleviate the problems. Plumbing deficiencies were also a common problem in the barracks across the services. 
Base officials told us that plumbing problems—including broken and clogged toilets and urinals, inoperable showers, pipe leaks, and slow or clogged drainpipes and sinks—were recurring problems that often awaited repairs due to maintenance-funding shortages. As shown in figures 3 and 4, we observed leaking drainpipes and broken or clogged bath fixtures in many of the barracks we visited. In regard to the broken fixtures, training officials told us that the problems had exacerbated an undesirable situation that already existed in the barracks—a shortage of fixtures and showers to adequately accommodate the demands of recruit training. These officials told us that because of the inadequate bath facilities for the high number of recruits, they often had to perform “workarounds”—such as establishing time limits for recruits taking showers—in order to minimize, but not eliminate, adverse effects on training time. Base officials at most of the locations we visited attributed the deteriorated condition of the recruit barracks to recurring inadequate maintenance, which they ascribed to funding shortages that had occurred over the last 10 years. Without adequate maintenance, facilities tend to deteriorate more rapidly. In many cases that officials cited, they were focusing on emergency repairs and not performing routine preventative maintenance. Our analysis of cost data generated by DOD’s facility sustainment model showed, for example, that Fort Knox required about $38 million in fiscal year 2002 to sustain its base facilities. However, base officials told us they received about $10 million, or 26 percent, of the required funding. Officials at other Army basic training sites also told us that they receive less funding, typically 30 to 40 percent, than what they considered was required to sustain their facilities. 
Army officials told us that, over time, the maintenance funding shortfalls at their training bases have been caused primarily by the migration of funding from maintenance accounts to support other priorities, such as the training mission. While most barracks across the services had significant deficiencies, others were in better condition, primarily because they had recently been constructed or renovated. Those barracks that we observed to be in better condition were scattered throughout the Army, Air Force, and Marine Corps locations. Even at those locations where some barracks were in very poor condition, we occasionally observed other barracks in much better condition. For example, at Parris Island, the Marine Corps recently completed construction of a new female recruit barracks. At Fort Jackson, the Army repaired windows, plumbing, and roofs in several “starship” barracks and similar repairs were underway in two other starships. Figures 5 and 6 show renovated bath areas at Lackland Air Force Base in Texas and the Marine Corps Recruit Depot at San Diego. The services’ approaches to recapitalize their recruit barracks vary and are influenced by their overall priorities to improve all facilities. The Marine Corps and Air Force are focusing primarily on renovating existing facilities while the Navy plans to construct all new recruit barracks. The Army also expects to renovate and construct recruit barracks, but the majority of the funding needed to support these efforts is not expected to be programmed and available until after 2008 because of the priority placed on improving bachelor enlisted quarters. Table 3 summarizes the services’ recapitalization plans. The Navy has placed a high priority on replacing its 16 recruit barracks by fiscal year 2009 at an estimated cost of $570 million using military construction funds. The Navy recently completed a new recruit reception barracks, and the Congress has approved funding for four additional barracks. 
Two barracks are under construction with occupancy expected later this year (see fig. 7), and the contract for 2 more barracks was awarded in May 2002. The Navy has requested funds for another 2 barracks in its fiscal year 2003 military construction budget submission and plans to request funds for the remaining 9 barracks in fiscal years 2004 through 2007. The Navy expects construction on the last barracks to be completed by 2009. Navy officials told us that other high-priority Navy-wide efforts (e.g., providing quality bachelor enlisted quarters and housing for sailors while ships are in homeport) could affect the Navy’s recapitalization efforts for recruit barracks. The Army projects an estimated $1.7 billion will be needed to renovate or replace much of its recruit training barracks, but most of the work is long-term over the next 20 years, primarily because renovating and replacing bachelor enlisted quarters has been a higher priority in the near-term. Through fiscal year 2003, the Army expects to spend about $154 million for 2 new barracks—1 each at Fort Jackson and Fort Leonard Wood. Army officials stated that barracks at these locations were given priority over other locations because of capacity shortfalls at these installations. After fiscal year 2003, the Army estimates spending nearly $1.6 billion in military construction funds to recapitalize other recruit barracks—about $359 million to renovate existing barracks at several locations and about $1.2 billion to build new barracks at all locations, except Fort Sill. Only Forts Jackson and Leonard Wood are expected to receive funding for new barracks through fiscal year 2007. Further, the Army does not expect to begin much additional work until after 2008, when it expects to complete the renovation or replacement of bachelor enlisted quarters. As a result, Army officials stated that the remaining required funding for recruit barracks would most likely be requested between 2009 and 2025.
The Marine Corps has a more limited recruit barracks recapitalization program, primarily because it has placed a high priority on renovating or replacing bachelor enlisted quarters in the near-term. The three recruit training installations plan to renovate their existing recruit barracks and construct two additional barracks at Parris Island and San Diego. The Marine Corps expects to spend about $40 million in operation and maintenance funds to renovate existing barracks at its training locations by fiscal year 2004. The renovations include replacing the bath and shower facilities, replacing hot water and heating and air conditioning systems, and upgrading the electrical systems. The Marine Corps also expects to spend at least $16 million in military construction for the new barracks by fiscal year 2009. The Air Force has placed a high priority on renovating, rather than replacing, its recruit barracks in the near-term. It expects to spend about $89 million—primarily operation and maintenance funds—to renovate its existing barracks and convert another facility for use as a recruit barracks. As of April 2002, the Air Force had renovated 5 of its existing 8 barracks and expected to complete the remaining renovations by 2006. The renovations include upgrading heating, ventilation, and air-conditioning systems as well as installing new windows and improving the central baths. Due to expected increases in the number of recruits, the Air Force has also identified an additional building to be renovated for use as a recruit barracks. The Air Force intends to complete this renovation in fiscal year 2003. Officials at Lackland Air Force Base stated they are currently drafting a new base master plan, which identifies the need to build new recruit barracks starting around 2012.
An official from the Office of the Deputy Under Secretary of Defense (Installations & Environment) orally concurred with the information in our report and provided technical comments that we incorporated as appropriate. We performed our work at the Office of the Secretary of Defense and the headquarters of each military service. We also visited each military installation that conducts recruit basic training—Fort Jackson, South Carolina; Fort Benning, Georgia; Fort Knox, Kentucky; Fort Leonard Wood, Missouri; Fort Sill, Oklahoma; Great Lakes Naval Training Center, Illinois; Lackland Air Force Base, Texas; Marine Corps Recruit Depot, Parris Island, South Carolina; Marine Corps Recruit Depot, San Diego, California; and Camp Pendleton, California. In discussing recruit barracks, we included barracks used to house recruits attending the Army’s One Station Unit Training. This training, which is conducted at select basic training locations for recruits interested in specific military occupational specialties, combines basic training with advanced individual training into one continuous course. To assess the physical condition of recruit barracks, we reviewed the fiscal year 2000 and 2001 installation readiness reports and supporting documentation for the ten installations that conduct basic training. We also toured several barracks at each installation and photographed conditions of the barracks. Finally, we interviewed officials at the services’ headquarters and each installation regarding the process used to inspect facilities, collect information to support the condition rating, and the underlying reasons for the current condition of the facilities. To determine the services’ plans to sustain and recapitalize recruit barracks, we reviewed the services’ plans for renovating their existing barracks and constructing new barracks.
In addition, we interviewed officials in the headquarters of each service responsible for managing installations and programming operation and maintenance and military construction funds. We conducted our work from March through May 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on GAO’s Web site at www.gao.gov and to others upon request. Please contact me on (202) 512-8412 if you or your staff have any questions regarding this report. Key contributors to this report were Michael Kennedy, James Reifsnyder, Richard Meeks, Laura Talbott, and R.K. Wild. The military services conduct recruit basic training at ten installations in the United States. The Army has the most locations—five, with Fort Jackson, South Carolina, training the most Army recruits. The Marine Corps conducts its training at two primary locations—Parris Island, South Carolina, on the east coast and San Diego in the west. Further, about 4 weeks (consisting of weapons qualification and field training exercises) of the Marine Corps’ 12-week basic training course at San Diego is conducted at Camp Pendleton because of training space limitations at its San Diego location. The Navy and Air Force conduct their basic training at one location each—Great Lakes, Illinois, and Lackland Air Force Base in San Antonio, Texas, respectively. Under DOD’s installation readiness reporting system, military installation facilities are grouped into nine separate facility classes. Recruit barracks are part of the “community and housing” facility class.
Figure 9 depicts the fiscal year 2001 C-ratings for each of the nine facility classes, as well as for the recruit barracks component of the “community and housing” facility class, at each basic training location. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. | The Department of Defense reports that it has been faced with difficulties adequately maintaining its facilities to meet mission requirements. Facilities have been aging and deteriorating as funds needed to sustain and recapitalize the facilities have fallen short of requirements.
GAO's review of the services' condition assessments in conjunction with visits to the basic training locations showed that most barracks were in need of significant repair, although some barracks were in better condition than others. GAO found that the exteriors of each service's barracks were generally in good condition and presented an acceptable appearance, but the barracks' infrastructure often had persistent repair problems because of inadequate maintenance. The services' approaches to recapitalize their recruit barracks vary and are influenced by their overall priorities to improve all facilities. Although the Navy, Air Force, and Marine Corps are addressing many of their recapitalization needs in the near-term, most of the Army's plans are longer term. |
IRS is responsible for administering our nation’s voluntary tax system in a fair and efficient manner. To do so, IRS has a staff of about 115,000 employees who work at hundreds of locations in the United States and in several foreign countries. These employees (1) process over 200 million tax returns each year, (2) examine returns to determine whether additional taxes are owed, (3) collect delinquent taxes, and (4) investigate civil and criminal violations of the tax laws. To aid in carrying out these responsibilities, Congress has provided IRS with a broad set of discretionary enforcement powers. These enforcement powers include (1) examining taxpayers’ returns and assessing additional tax, interest, and penalties for underreported income or failure to file a return, (2) enforcing the collection of unpaid taxes by such actions as seizing taxpayers’ property, and (3) conducting criminal investigations of taxpayers and recommending prosecution for violations of the tax laws. In fiscal year 1992, IRS examined over 1 million individual taxpayers’ returns, took about 4.7 million enforced collection actions for delinquent taxes, and initiated over 6,000 criminal investigations. Each of these actions had the potential to create an adversarial relationship between the affected taxpayers and IRS staff. In 1988, concerned about allegations of taxpayer abuse, Congress enacted the Taxpayer Bill of Rights, a law containing numerous provisions to strengthen and clarify taxpayers’ rights in their dealings with IRS. In 1992, additional taxpayers’ rights legislation, identified as “Taxpayer Bill of Rights 2,” was passed by Congress as part of broader tax legislation but was not signed into law by the President. Very similar legislation, still identified as Taxpayer Bill of Rights 2, was introduced in the 103rd Congress as S. 542 and H.R. 22. In addition, some provisions of H.R. 22 were included in H.R. 3419, introduced in November 1993. 
As of September 1994, Congress had not passed these bills. At the outset, we learned that IRS has a wide range of controls and procedures to govern its relationships with taxpayers. But IRS has neither a specific definition of nor management information on the nature and extent of taxpayer abuse. Thus, it was not possible to select a representative sample of IRS actions to determine if taxpayer abuse had occurred and, if so, to estimate how frequently or attempt to determine if there were patterns of abuse in the many IRS divisions and offices throughout the country. Given the lack of an IRS definition of taxpayer abuse, we found it necessary to develop our own. On the basis of interviews with IRS officials and representatives of tax practitioners and taxpayer advocate organizations, we developed a definition of abuse that encompassed a broad range of situations potentially harmful to taxpayers. We attempted to define abuse from the taxpayer’s point of view, not from IRS’ viewpoint. Therefore, we defined it to include situations in which taxpayers were, or perceived they were, harmed when (1) an IRS employee violated a law, regulation, or IRS’ Rules of Conduct; (2) an IRS employee was unnecessarily aggressive in applying discretionary enforcement power; or (3) IRS’ information systems broke down. By “harmed” we meant primarily financial harm. But, we also recognized and incorporated into our definition the fact that frustration and the resulting burden arising from lengthy delays in resolving problems, time spent in dealing with IRS, and fear of the IRS can be factors in taxpayers’ situations that may contribute to their perception of abuse even though—from IRS’ perspective—the taxpayer may not have been abused. Next, we identified the controls and related measures IRS uses to prevent instances that would meet our definition of taxpayer abuse and to respond to allegations of such instances occurring. 
We also researched various IRS data sources and focused on Problem Resolution Program files, congressional correspondence files, and internal audit and internal security reports and files to find possible examples of abuse that would fall within our definition. We judgmentally selected 26 such examples and used them to analyze the effectiveness of IRS’ controls and processes to prevent such abuse. While we did not follow up on all 26 examples to determine whether taxpayers were actually harmed by IRS, we cited the circumstances of these examples in our discussions with IRS managers to learn the range of controls in place that should have prevented these circumstances from occurring. We selected these examples without regard to when the incidents occurred, resulting in examples spanning the period 1987 through 1993. However, we evaluated the controls that were in place during the period of our review, from April 1992 to January 1994. To illustrate our approach, we found an example in which an IRS employee, after accepting a cash payment from a taxpayer, stole the cash payment and falsified the document used to credit the taxpayer’s account. This led us to review the adequacy of IRS’ controls over taxpayers’ cash payments. Our review of the controls then led us to a conclusion that they could be strengthened and a recommendation about what should be done. During our review, an allegation of potential taxpayer abuse received considerable media attention because it involved reports of possible improper contacts with IRS by staff of the White House and the Federal Bureau of Investigation (FBI). We included an analysis of both the allegation and the adequacy of IRS’ controls to deal with such contacts in our report. The details of our objectives, scope, and methodology are discussed in appendix I. 
Appendix II provides a detailed description of IRS’ controls, processes, and oversight offices, as well as recent congressional and IRS initiatives that govern IRS’ interaction with taxpayers. Appendix III provides a summary of the provisions in the 1988 Taxpayer Bill of Rights. Appendix IV is a summary of GAO products that cover issues related to those discussed in this report. The Acting Commissioner of Internal Revenue provided written comments on a draft of this report. Those comments are presented and evaluated on pages 21 to 26 and are reprinted in appendix V. IRS has a wide range of controls, processes, and oversight offices designed to govern how its employees interact with taxpayers. Specifically, IRS has operational controls governing examination, collection, and criminal investigation activities to prevent taxpayer abuse. IRS also has a Problem Resolution Office to handle taxpayer complaints, if a taxpayer feels that these operational controls have broken down. In addition, IRS’ Internal Security Division investigates taxpayer complaints involving potential criminal misconduct by IRS employees. In recent years, legislation and IRS initiatives have aided taxpayers in dealing with IRS. In 1988, Congress passed the Taxpayer Bill of Rights (P.L. 100-647) containing numerous provisions that expanded taxpayer rights. IRS has begun quality management, ethics and integrity, and tax systems modernization initiatives, as well as a limited collection appeals project. And, a key element of IRS’ current strategy is emphasis on treating taxpayers as “customers.” All of these initiatives should help IRS to better serve taxpayers and to prevent their mistreatment. Despite IRS’ efforts to prevent violations of taxpayers’ rights, we found various instances of what we consider to be taxpayer abuse by IRS. Some instances involved situations in which IRS employees violated either the law or IRS’ Rules of Conduct and the taxpayer abuse may have been intentional. 
Other instances involved situations in which IRS employees violated neither the law nor a regulation, but used discretionary enforcement power in a way that appeared to unnecessarily create a financial or other hardship for the taxpayers. Still others involved IRS computer system problems that engaged taxpayers in lengthy efforts to resolve their tax problems, leaving them with the perception that they were abused by IRS. The following sections of this report discuss (1) the need for better information to aid in protecting taxpayers’ rights and (2) the specific areas where we believe IRS’ controls can be strengthened. Although IRS collects data on taxpayer complaints, it has neither a definition of nor management information for tracking and measuring taxpayer abuse. As a result, IRS is unable to determine the nature and extent of abuse by its employees or systems, and whether existing controls need to be strengthened. A specific definition of taxpayer abuse is essential to provide a basis for collecting consistent information about it and to assist IRS staff in identifying abuse when it occurs and preventing its reoccurrence. IRS has several management information systems that collect data on taxpayer complaints. Complaints handled by IRS’ Problem Resolution Program or investigated by its Internal Security Division are entered into their respective management information systems. IRS’ Labor Relations Division also has a management information system that includes the results of investigations of IRS employees and indicates any disciplinary actions taken against them, including those investigations that may have originated from taxpayer complaints. Each of these management information systems uses codes to track and measure various issues considered important to the respective offices, but none of them has a specific code for taxpayer abuse. For example, the Labor Relations system tracks such issues as criminal misconduct and misuse of authority by IRS employees. 
In some instances these particular issues may involve taxpayer abuse, but in other instances they do not. We found similar situations with both the Problem Resolution Program and Internal Security management information systems. Without a definition of taxpayer abuse and specific codes related to that definition, these systems are not currently able to record incidents of abuse to track their nature and extent. To better ensure that violations of taxpayers’ rights are minimized, we believe that IRS should establish a service-wide definition of taxpayer abuse and then identify and gather management information to systematically track its nature and extent. Although this may require IRS to modify some of its existing data bases, we believe that this can be accomplished without requiring additional appropriations. IRS is currently involved in an effort to develop broad-based performance indicators to allow top IRS, Treasury, other administration officials, Congress, and the public to better assess its performance in key areas. Developing the information needed to assess performance in controlling taxpayer abuse would seem to fit well into that effort. Taxpayer surveys IRS has conducted in recent years are another potential source of information about taxpayer abuse. As discussed in appendix II, these surveys have collected information from taxpayers about their views on how they were treated by IRS representatives. These surveys have not, however, included questions designed to identify possible abusive incidents for further analysis. Once IRS has defined and is systematically tracking abuse, these types of surveys could be used as another indicator of IRS’ progress. Public Law, Treasury Directives, and Internal Revenue Manual guidelines require that IRS protect the integrity, availability, and privacy of taxpayer information in its computer systems. Consequently, IRS employees are prohibited from obtaining access to taxpayer accounts without authorization. 
The Integrated Data Retrieval System (IDRS) is IRS’ primary computer system for accessing and adjusting taxpayer accounts. Authorized IRS staff obtain access to taxpayer information through IDRS terminals located at the service centers and the regional and district offices. There are approximately 56,000 staff nationwide authorized to use IDRS. Eventually, IRS plans to replace IDRS as part of its TSM initiative. According to IRS, under the new system, users will be able to obtain more taxpayer information than they can through IDRS. IRS has procedures and controls in place to aid in preventing and detecting unauthorized access and use of taxpayer information contained in IDRS. Specifically, each IDRS user is given a unique password that allows access to the system. Users are also assigned a profile of command codes—codes that, among other things, enable users to make changes in taxpayers’ accounts—based on the user’s job requirements. The profile limits the user to only those command codes needed to do his or her job effectively. IDRS also provides a means to identify all employees who access taxpayer accounts, as IDRS records each employee access of taxpayer information in a daily audit trail. IRS can search these audit trails to investigate specific allegations of unauthorized access, as well as to look for patterns of use that could indicate unauthorized access. In addition, IDRS automatically generates security reports when employees access their own accounts, their spouses’ accounts, or the accounts of other employees. Each IRS office has security personnel who are responsible for monitoring all IDRS activities, including monitoring security reports, adding and removing IDRS users, and assigning profiles for IDRS users. 
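The profile and audit-trail controls described above can be illustrated with a minimal sketch. This is not IRS code; the user names, command codes, and data structures are hypothetical, and the sketch shows only the general control pattern: every access is logged, and a user may execute only the command codes in his or her assigned profile.

```python
from datetime import datetime

# Hypothetical sketch of profile-based command authorization with an audit
# trail, in the spirit of the IDRS controls described above. All names and
# structures are illustrative assumptions, not IRS's actual system.

AUDIT_TRAIL = []  # each entry: (timestamp, user, command, account)

# Each user's profile limits the user to only those command codes needed
# to do his or her job.
USER_PROFILES = {
    "clerk01": {"VIEW"},             # view-only profile
    "agent07": {"VIEW", "ADJUST"},   # authority to adjust accounts
}

def run_command(user, command, account):
    """Record every access in the audit trail; reject commands outside the profile."""
    AUDIT_TRAIL.append((datetime.now(), user, command, account))
    if command not in USER_PROFILES.get(user, set()):
        return "DENIED: command not in user profile"
    return f"{command} executed on account {account}"

print(run_command("clerk01", "ADJUST", "123-45-6789"))  # outside profile -> denied
print(run_command("agent07", "ADJUST", "123-45-6789"))  # within profile -> executed
```

Note that, as the weaknesses discussed in this report suggest, a profile check of this kind limits what a user can do but not which taxpayer's account the user can reach; detecting improper "browsing" requires analysis of the audit trail itself.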
We learned through discussions with IRS Internal Audit staff and a review of an October 1992 Internal Audit report that these controls and procedures provide IRS with limited capability to (1) prevent employees from gaining unauthorized access to taxpayers’ accounts and (2) detect unauthorized access once it occurs. Even though IRS employees can access IDRS only with a password, once in the system, they cannot be prevented from accessing the account of any taxpayer living within their service center area. Furthermore, even though IDRS records every employee access of IDRS in its daily audit trail, these audit trails are so voluminous and detailed that they cannot be used efficiently to identify inappropriate access and misuse of IDRS information. In addition to these weaknesses, the security reports monitored by security personnel are not adequate to help them identify potential browsing, disclosure, or other integrity problems. Finally, according to the Internal Audit report, “. . . the IDRS Security Handbook and related training materials do not provide proper guidance to security personnel on how to detect potential employee misuse of IDRS.” In one of our examples of alleged abuse, an IRS employee, after a personal dispute with a contractor, gained access to the contractor’s account without authorization. The employee then allegedly used this information to threaten the contractor with enforcement action in an effort to favorably resolve the dispute. Because of the weaknesses in IDRS security as described above, the unauthorized access to the contractor’s account described in this example would not automatically have been detected by security personnel. Rather, it was only because the taxpayer complained that IRS management was made aware of this specific instance of taxpayer abuse. IRS management is aware of its overall problems with IDRS security because of the Internal Audit report mentioned above. 
According to the report, 368 IRS employees in one region had used IDRS to gain access to nonwork-related taxpayer accounts, including those of friends, relatives, neighbors, and celebrities. In most instances, the access did not result in changes to taxpayers’ accounts, but rather enabled the IRS employees to merely view the taxpayers’ account information. Ultimately, information on 79 employees was referred to Internal Security for investigation of potential criminal violations. Internal Security determined that six employees prepared fraudulent returns for taxpayers and then monitored the accounts on IDRS. The actions of some of these employees are being reviewed by the appropriate U.S. Attorney for potential criminal prosecution. On the basis of these findings, Internal Audit recommended that IRS management take actions to strengthen existing IDRS security controls. Internal Audit recommended seven steps to enhance security controls over IDRS, one of which was to ensure that the security system for TSM will have controls similar to those recommended for the current IDRS security system. We also discussed these problems in a September 1993 report that recommended several actions IRS needs to take to strengthen its general controls over computerized information systems. We and IRS are continuing to study ways to solve these problems. IRS is currently working on a program to help detect unauthorized access to IDRS. Specifically, the goal is to implement standardized IDRS reviews periodically in each service center. To prevent unauthorized access to taxpayer accounts, IRS wants to limit some employees’ access to only specified accounts authorized by a manager for official purposes. IRS has also indicated that it plans to build security controls to minimize unauthorized access of taxpayer information into the system that will eventually replace IDRS. 
Although IRS has yet to develop a cost/benefit analysis for these security controls, IRS officials said that the cost of these controls will be included in future requests for TSM appropriations. When selecting taxpayers’ returns for examination, IRS often uses computer-generated lists to identify returns with examination potential. However, because computer-aided selection techniques rely solely on information in filed returns, IRS collects information from outside sources to identify other areas of potential taxpayer noncompliance. Information Gathering Projects (IGP) are one technique that IRS uses to collect outside information and to identify returns with examination potential. In fiscal years 1990 and 1991, district office examinations of individual taxpayers resulting from IGPs were about 4.5 percent of the total of such examinations. An IGP is a study or survey undertaken to identify noncompliance with the tax laws. It usually involves a limited number of taxpayers within such categories as an occupation, an industry, a geographic area, or a specific economic activity. IRS requires that an IGP be authorized by a district director or higher level management official for a specified length of time during which specific tax-related information is to be collected from third party sources. Once authorized, IGPs normally include an information gathering phase and an examination phase. During the information gathering phase, a project team—revenue agents and a project coordinator—collect and analyze information on a particular group of taxpayers. On the basis of this analysis, the project team will identify tax returns that have potential for tax changes and therefore should be examined during the project. Examination staff then review the returns to identify those with the greatest potential for tax changes. The returns selected will then be sent to an examination group designated to conduct the examinations. 
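The two-phase IGP process described above lends itself to a separation-of-duties check: the employee who selects returns for examination should not be one of the project staff who built the candidate pool, and selection should apply written criteria. The following is a minimal sketch under those assumptions; the names, thresholds, and data fields are hypothetical, not IRS procedure.

```python
# Hypothetical sketch of a separation-of-duties control for an Information
# Gathering Project: the selector may not have served on the information
# gathering team, and returns are chosen only by a documented criterion.

def select_returns(candidates, selector, gathering_team, criteria):
    """Apply a written selection criterion, enforcing separation of duties."""
    if selector in gathering_team:
        raise PermissionError(
            f"{selector} gathered the candidate pool and may not also select from it"
        )
    return [r for r in candidates if criteria(r)]

candidates = [
    {"taxpayer": "A", "potential_change": 12000},
    {"taxpayer": "B", "potential_change": 300},
]
# Written criterion (assumed for illustration): examine only returns whose
# potential tax change exceeds a documented threshold.
picked = select_returns(candidates, "reviewer2",
                        gathering_team={"agent1", "agent3"},
                        criteria=lambda r: r["potential_change"] > 1000)
print([r["taxpayer"] for r in picked])  # -> ['A']
```

Under such a control, the revenue agent in the example that follows could not both develop the candidate pool and place a personal adversary's return on the examination list.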
Although IRS procedures provide general guidelines for identifying, approving, initiating, and coordinating IGPs, the controls and procedures are not adequate to prevent examination staff from selectively targeting individual taxpayers for examination. For example, although IRS requires project coordinators to develop general work plans for each IGP, there is no requirement in IRS’ procedures that specific criteria be established for selecting tax returns to be examined during the project. Furthermore, IRS’ procedures do not require a separation of duties—a key examination control against potential abuse—between project staff responsible for identifying potential returns to be included in the project and staff responsible for selecting the tax returns to be examined. As a result, an examination employee working on the project could be involved in (1) the project’s information gathering phase, which results in the selection of a group of tax returns that have potential for tax changes and (2) selecting those returns from that group believed to have the greatest potential for tax changes, which will be examined. This makes it possible for such an employee to selectively target an individual taxpayer for examination during the project. In one of our examples, a revenue agent working on an IGP included for examination the returns of two taxpayers against whom the revenue agent had initiated legal action stemming from a personal business dispute. IRS is currently implementing Compliance 2000, an initiative designed to increase taxpayer compliance by (1) identifying market segments believed to be in noncompliance, (2) determining the reasons for such noncompliance, and (3) improving taxpayer compliance using assistance and education methods before initiating more traditional enforcement methods. 
According to IRS officials, as IRS implements Compliance 2000, it will likely increase the use of special enforcement projects and, therefore, increase the number of returns selected for examination using locally-derived and possibly subjective criteria, such as those used during IGPs. To help ensure that taxpayers are not improperly targeted for examination by IRS employees during IGPs, we believe that IRS should revise its guidelines to require that specific criteria be established for selecting taxpayers’ returns to be examined during these projects. We also believe there should be a separation of duties between project staff who identify returns with potential for tax changes, and staff who select the returns to be examined. Since these are basically procedural changes, we do not believe that IRS would incur substantial costs in implementing them. IRS officials told us that IRS prefers that taxpayers settle their tax bills with a check or money order. However, IRS is required by law to accept cash if a taxpayer insists on this method of payment. When a taxpayer pays with cash, an IRS collection employee is required to provide the taxpayer with a cash receipt—IRS Form 809. At the end of each day, collection support staff are to process the payments and reconcile all Form 809 receipts they receive with daily collection activity reports submitted to them by collection staff. In addition to the daily reconciliation, collection managers are to do an annual reconciliation of all Form 809 receipts issued to collection staff to ensure that all receipts are accounted for. Any discrepancies noted during either the daily or annual reconciliations are to be discussed by the appropriate collection employee and his or her supervisor. We found that IRS did not consistently mention its preference for tax payments by check or money order in its forms, notices, and publications. 
For example, IRS Publication 594 “Understanding the Collection Process” says that taxpayers must receive an IRS Form 809 receipt for cash payments to the IRS, but does not say that IRS prefers either a check or money order. We also found that the controls to prevent IRS employees from embezzling taxpayers’ cash payments relied to a great extent on employee integrity and taxpayer complaints. Although Form 809 receipts provided to taxpayers are to be reconciled with daily collection reports, there are no management reviews of all Form 809 receipts other than the annual reconciliation. As a result, if a collection employee embezzled a taxpayer’s cash payment and the embezzlement was not detected through the daily reconciliation, IRS might not detect this until the next annual reconciliation. In the interim, IRS relies on taxpayer complaints to identify when employees embezzle taxpayers’ cash remittances. In one of our examples, we found that a taxpayer complained to IRS that her bank account was levied after she fully paid her tax liability with cash. Internal Security investigated her complaint and determined that the IRS collection employee whom she paid had embezzled most of her cash payment by altering the amount on the cash receipt he submitted to the collection support staff. This employee also embezzled other taxpayers’ cash payments for which he had not submitted any cash receipts. Unfortunately for the taxpayer in this example, the situation was not detected until the taxpayer complained about the erroneous bank account levy made by IRS. Reconciling outstanding cash receipts more often may have detected this problem before the taxpayer was subjected to the additional IRS collection action. To better protect against possible embezzlement of cash payments, we believe that IRS should reconcile all outstanding Form 809 cash receipts more often than once a year. 
We also believe that IRS should consistently stress in its forms, notices, and publications that taxpayers should use checks or money orders whenever possible, rather than cash to pay their tax bills. In our view, IRS could implement these changes at minimal cost, as they are basically procedural changes and modifications to existing forms and publications. When businesses fail to collect or pay withheld income, employment, or excise taxes, IRS may assess a trust fund recovery penalty against the responsible officers and employees. This penalty amounts to 100 percent of the unpaid taxes. IRS may also charge interest from the date the penalty was assessed. In determining who should be assessed the penalty, IRS is required to show that the employee being assessed was responsible for and willfully failed to collect or pay the taxes to IRS. Although IRS may assess the penalty against all responsible officers and employees, it is to collect only the amount of tax owed. That is, if taxes owed amount to $100, IRS may hold various company officials responsible, but it is to collect no more than $100 (plus interest) in total from these officials. We reported on IRS’ process for collecting 100-percent penalties in August 1989. Relatively large trust fund recovery penalties have caused financial hardships for the individuals involved. Some individuals have complained that they were wrongfully assessed the penalty and then required by IRS to show why they were not liable for the penalty. In one of the cases we reviewed, a bookkeeper for a company that had declared bankruptcy was assessed penalties and interest on the business’s unpaid taxes. After long and exhaustive proceedings, the state tax agency determined that the bookkeeper was not an operating officer and did not owe the state penalty. Nonetheless, IRS continued to pursue the bookkeeper for payment of the federal penalty. 
Six months later, with the help of his Congressman, the bookkeeper convinced IRS that he was not responsible for paying the trust fund taxes. Some responsible employees may not be aware that they could be assessed the penalty if they fail to ensure that the taxes are paid to IRS. Moreover, under current law—Internal Revenue Code Section 6103—IRS is prohibited from disclosing to a responsible person the names of other responsible persons held liable for the penalty and the general nature of collection actions taken against them. IRS has recognized weaknesses in its controls and procedures for identifying the responsible person for this type of penalty. As a result, IRS instituted policy changes aimed at ensuring that responsibility for paying the penalty remained with the responsible person. The revised policy requires IRS managers to ensure that their staffs conduct quality investigations to identify responsible persons and prove willful intent. Taxpayer rights legislation introduced in Congress in 1992 and 1993 contained provisions that, if enacted, would assist individuals in getting information about the trust fund recovery penalty. The bills would require IRS to increase awareness of the penalty through special information packets and printed warnings on tax documents. The bills would also allow each individual assessed the penalty to find out from IRS the names of others against whom IRS had assessed the penalty. Also, the bills would allow these assessed individuals to find out the nature of any collection actions being taken against the other assessed individuals so that all involved parties would have complete information with which to deal with IRS and each other. We support the intent of this provision of the proposed legislation. 
To help responsible officials and employees become more aware of their responsibilities to collect and forward trust fund taxes to IRS, we believe that IRS should provide special information packets describing those responsibilities and the penalty for failing to meet them. IRS is already implementing changes to its trust fund recovery penalty assessment process, which will remedy some of these problems. As a result, we do not believe that IRS would incur significant costs to implement the additional changes. We found examples of situations in which taxpayers repeatedly received tax deficiency notices and payment demands despite continual contacts with IRS over a period of months and even years in an attempt to resolve problems with their accounts. IRS’ inability to correct the underlying problems in such situations resulted in taxpayers feeling frustrated. In these instances, although no IRS employee appeared to have intentionally abused them, the taxpayers’ correspondence with IRS indicated they felt they were abused by the “tax system.” In one instance, a taxpayer required intervention from her Senator to prevent IRS from taking more than $50,000 to pay for taxes on a sale of property that the taxpayer had not owned or sold. The problem arose because two taxpayers had the same social security number and the same name. Initially, IRS released the levy it had placed on the taxpayer’s salary to allow her time to prove that she was not the seller of the property. Although the taxpayer tried to resolve the problem by obtaining a letter from the Social Security Administration explaining the problem with the duplicate social security number and same name, IRS would not accept the letter as proof of who sold the property. The taxpayer’s efforts to resolve the problem by working with the bank that had handled the property sale also failed. Finally, the taxpayer contacted her Senator and eventually was able to get the levy released. 
In another instance, a taxpayer who promptly paid an additional tax assessment in early 1991 got help from his Senator to get IRS to acknowledge that he had paid his assessment in a timely manner. Soon after the taxpayer sent his payment to IRS, it sent the taxpayer a check in an amount very close to the amount he had originally sent IRS. Later, IRS wrote the taxpayer, asking payment for the original tax assessment and adding a penalty for late payment. Correspondence continued for months back and forth between the taxpayer and IRS. Finally, in early 1992, nearly a year after the taxpayer had made his payment, the matter was resolved with IRS noting that the problem occurred because the taxpayer’s payment was posted to his account before the additional tax assessment had been recorded. A more general type of problem affects divorced or separated spouses. Divorced or separated taxpayers who had previously filed joint returns may subsequently be assessed a tax deficiency. In these instances, IRS’ procedure is to send notices of deficiency to the last known address of the spouse whose name and social security number appeared first on the joint return. Once enforcement action begins, the other spouse may be subjected to such actions as a levy on his or her salary without having been informed by IRS of the tax delinquency. IRS’ procedures require that duplicate notices of deficiency be sent by certified or registered mail to each spouse, if the spouses notify IRS that separate residences have been established. However, IRS’ computer system is not capable of searching taxpayer files each time a notice of deficiency is issued for a joint return to determine whether spouses have subsequently filed separate returns with new addresses or otherwise provided separate addresses. 
IRS Problem Resolution Program officials in IRS’ Southeast Region told us they frequently became involved in situations where a separated or divorced taxpayer, typically a woman, says that the first notice she received for a joint return deficiency was a notice of lien or levy on her property. In a February 1992 congressional hearing on S. 2239, Taxpayer Bill of Rights 2, Treasury’s Assistant Secretary for Tax Policy said that IRS would begin sending a notice of deficiency to both parties in such situations “. . . as soon as modernization of its computer system makes it feasible to do so.” More recently, IRS Problem Resolution staff told us that IRS’ TSM program will improve existing computer capabilities and make it possible for IRS to begin providing notices to both parties. The three examples discussed above, and others we have reviewed, have the common thread of occurring and continuing primarily because of information handling problems. We believe that IRS’ implementation of the various elements of TSM, together with IRS’ emphasis on improving operations and providing better service to taxpayers, should go a long way toward eliminating these types of problems. With adequate controls to guard against misuse, TSM should make taxpayer information more accurate and more readily available to IRS employees and, consequently, should increase IRS’ ability to help taxpayers resolve their problems. However, TSM is a massive, long-term effort, extending into the next century, so it may be some time before the technological capability to resolve these problems is in place. Given that, we believe IRS needs to do as much as it can to identify possible interim solutions and to assure that TSM deals with these problems. First, IRS can systematically identify, inventory, and categorize the various kinds of information handling problems that lead to taxpayer frustration and perceptions of abuse. 
Analysis of these data in connection with IRS’ operational improvement efforts may help identify some short term remedies. Second, IRS can use the data in its current operational improvement effort to define TSM business requirements to make sure that TSM has the capabilities needed to deal with these types of problems. We recently testified about the need for IRS to define its business requirements for TSM in detail. Carrying out these steps would require some analytical resources but, since the steps are consistent with TSM and operational improvement efforts already underway, we do not believe substantial incremental costs would be incurred. IRS controls for dealing with third party contacts that provide information on possible tax violations call for the information to be referred to the appropriate IRS unit for evaluation as to what action, if any, to take. For example, if someone contacts IRS with information that a taxpayer has not reported a substantial amount of his or her income and suggests that an audit could be warranted, that information would be referred to the Examination Division in the IRS field office that has jurisdiction. Examination staff would then evaluate the information for credibility and specificity, including reviewing the taxpayer’s return—assuming one was filed—to see if there were indications of underreporting as part of the decision on whether to examine the taxpayer’s return. Since IRS’ National Office is prohibited from initiating an examination, field office managers make final decisions in such cases. IRS has specific procedures to handle requests from the White House for matters such as preparing tax check reports on prospective appointees, but there are no specific procedures to handle a White House contact offering information about potential tax violations. 
According to IRS officials, such information would be handled in the same manner as any other third party communication in that it would be evaluated for potential tax examination and/or criminal investigation purposes by Examination Division or Criminal Investigation Division staff. In May 1993, the White House announced that seven employees of the White House Travel Office had been fired because of concerns about the office’s management and financial integrity. (These and related issues are discussed in detail in our report White House Travel Office Operations, GAO/GGD-94-132, May 2, 1994.) Soon after, related allegations arose that the White House and/or the FBI made improper contacts with IRS, resulting in improper IRS contacts with a taxpayer. These allegations have been reviewed by three organizations. A White House team, led by the former Chief of Staff to the President, reported that there was no evidence of White House contact with IRS in connection with the Travel Office issue. The IRS Inspection Service investigated the allegations involving IRS and concluded that no White House contact had been made with IRS concerning this matter and that IRS employees had carried out their duties properly. Although IRS released a heavily edited copy of its report, most of the report cannot be made public because it contains tax return information protected from disclosure by section 6103 of the Internal Revenue Code, and the taxpayer declined to grant a waiver from this provision of the law so that IRS could comment publicly on this matter. At the request of a Member of Congress, the Office of Inspector General (OIG), Department of the Treasury, also investigated the allegations involving IRS. The OIG report was issued on March 31, 1994. The OIG, in its report, also concluded that the White House had not contacted IRS about the Travel Office matter and that it found no evidence of taxpayer abuse by IRS employees. 
Disclosure of tax return information in the OIG’s report also was limited by section 6103. We reviewed the three reports and supporting documentation and discussed their findings with representatives of the three organizations. We also interviewed key White House, IRS, and FBI personnel involved in the events leading up to the allegations of abuse by IRS. Finally, we interviewed representatives of the taxpayer involved. On the basis of our review, we believe that (1) neither the White House nor the FBI made improper contact with IRS, (2) IRS employees carried out their duties properly and in accordance with IRS guidelines and procedures, and (3) abuse did not occur. Section 6103 provides us with access to tax return information to enable us to carry out our work, but it also limits the information we may disclose. Thus, we are not able to provide the details of our review in this report. In July 1993, the White House Counsel issued guidance to White House staff on contacts with the FBI and the IRS, which supplemented guidelines issued earlier in the year. The July guidelines stated that “It is never appropriate for White House personnel to initiate an investigation or audit by directly contacting the Internal Revenue Service.” The guidelines further provided that any information about possible violations of law or wrongful activities were to be communicated by White House staff to the Counsel to the President, who would decide whether the information should be provided to senior Justice or Treasury Department officials. As noted above, IRS has specific procedures for handling White House contacts about tax checks for appointees and for other administrative matters, and general procedures for handling third-party contacts from any source offering information that may lead to examinations or investigations. IRS does not, however, have specific procedures to deal with a White House contact offering information about possible tax violations. 
We emphasize that we found no evidence of taxpayer abuse in this situation. However, we believe IRS can expand its procedures by adding guidance to its employees on how to handle White House contacts other than those involving tax checks and routine administrative matters. Developing and issuing such guidance should not impose any significant incremental costs on IRS. IRS has a wide range of controls, processes, and oversight offices designed to govern how its employees interact with taxpayers. While this “system” of controls has many elements designed to protect taxpayers from abuse, including IRS’ initiatives and numerous protections provided by law, it lacks the key element of timely and accurate information about when, where, how often, and under what circumstances taxpayer abuse occurs. This information would greatly enhance IRS’ ability to pull together its various efforts to deal with abuse into a more effective system for minimizing it. The information would also be valuable to Congress and taxpayers in general in assessing IRS’ progress in treating taxpayers as customers—an often cited IRS goal. Therefore, we believe IRS should define taxpayer abuse and develop the management information needed to identify its nature and extent. In addition, we believe IRS can strengthen its controls in several specific areas and provide additional information to taxpayers that will increase their ability to protect their rights. 
Specifically, we believe IRS can (1) ensure that the information systems now being developed under its TSM initiative include the capability to minimize unauthorized access to taxpayer information, (2) clarify its guidelines for selecting tax returns during IGPs, (3) reconcile its cash receipts more often and encourage taxpayers to avoid using cash whenever possible in making payments to IRS, (4) provide individuals who may be subject to trust fund recovery penalties with more information about their responsibilities, (5) attempt to identify short-term remedies to minimize the problems that IRS’ information-handling weaknesses cause taxpayers and ensure that the TSM program includes requirements designed to solve those problems as the new information systems are implemented over the next several years, and (6) develop specific guidance for IRS employees on how they are to handle White House contacts. Finally, we believe that legislation is needed to provide IRS with authority to disclose information to all responsible officers involved in IRS efforts to collect a trust fund recovery penalty. This authority was included in legislation titled Taxpayer Bill of Rights 2 (S. 542 and H.R. 22), introduced in the 103rd Congress. We do not believe that Congress needs to provide additional appropriations to enable IRS to implement these recommendations, with one possible exception. Although additional funding may be needed so that IRS can deal with the information management problems discussed in this report as it proceeds with the TSM program, IRS does not know the amount of funds that will be needed because it has yet to decide on specific requirements and develop a cost/benefit analysis for these requirements. Any funding needed should be included in budget requests for IRS’ TSM program. We believe that the steps we are recommending to correct the remaining problems will not require additional appropriations.
To improve IRS’ ability to manage its interactions with taxpayers, we recommend that the Commissioner of Internal Revenue establish a service-wide definition of taxpayer abuse or mistreatment and identify and gather the management information needed to systematically track its nature and extent. To strengthen controls for preventing taxpayer abuse within certain areas of IRS operations, we recommend that the Commissioner of Internal Revenue

- ensure that TSM provides the capability to minimize unauthorized employee access to taxpayer information in the computer system that eventually replaces IDRS;
- revise the guidelines for IGPs to require that specific criteria be established for selecting taxpayers’ returns to be examined during each project and to require a separation of duties between staff who identify returns with potential for tax changes and staff who select the returns to be examined;
- reconcile all outstanding Form 809 cash receipts more often than once a year, and stress in forms, notices, and publications that taxpayers should use checks or money orders whenever possible to pay their tax bills, rather than cash;
- better inform taxpayers about their responsibility and potential liability for the trust fund recovery penalty by providing taxpayers with special information packets;
- seek ways to alleviate taxpayers’ frustration in the short term by analyzing the most prevalent kinds of information-handling problems and ensuring that requirements now being developed for TSM information systems provide for long-term solutions to those problems; and
- provide specific guidance for IRS employees on how they should handle White House contacts other than those involving tax checks of potential appointees or routine administrative matters.
To better enable taxpayers and IRS to resolve trust fund liabilities, we recommend that Congress amend the Internal Revenue Code to allow IRS to provide information to all responsible officers regarding its efforts to collect the trust fund recovery penalty from other responsible officers. The Acting Commissioner of Internal Revenue commented on a draft of this report by letter dated August 26, 1994. (See app. V.) We also discussed the draft report several times with IRS officials. Our evaluation of IRS’ written comments on our proposed recommendations in the draft report follows. IRS disagreed with our recommendation that it establish a definition of taxpayer abuse and identify and gather the information needed to systematically track the nature and extent of such incidents. IRS said use of the term “taxpayer abuse” was misleading, inaccurate, and inflammatory; disagreed with parts of the definition of abuse used in our study; challenged the assumption that there was any need to collect additional information about abuse because its existing systems already identify and gather sufficient information to track and manage cases of improper treatment of taxpayers; suggested that our methodology was flawed because it did not show a statistically significant frequency of abuse; and asserted that the problem, to the extent it exists, was well under control. In summary, IRS said that the problem of taxpayer abuse, to the extent that it exists, is best defined, monitored, and corrected within the context of its definitions and current management information systems. Consequently, IRS planned no action on our recommendation. IRS’ disagreement with our definition of taxpayer abuse centered on two of the three components we used to define this issue in the absence of an IRS definition. 
While agreeing that taxpayers can be abused when IRS employees violate laws, regulations, or rules of conduct, IRS did not agree that harm resulting from employees aggressively applying discretionary enforcement power or information system breakdowns constituted taxpayer abuse. We believe that it is commendable when IRS employees aggressively respond to taxpayers who do not comply with the tax laws, particularly if the noncompliance appears to be intentional. However, we noted instances when taxpayers who may not have complied because they did not understand the tax laws also received aggressive—perhaps overly aggressive—treatment by IRS employees. Throughout our study, it was our intent to focus on these latter instances. We have clarified our definition to explicitly specify unnecessarily aggressive application of discretionary enforcement power. We also noted instances when taxpayers were thoroughly frustrated due to the time and cost they had to expend in order to resolve misunderstandings resulting from IRS information handling problems. In both types of situations, we can understand why taxpayers would feel abused by IRS even though there was no violation of laws, regulations, or rules of conduct. Another area in which we and IRS disagree is whether mistreatment of taxpayers, whatever its frequency and whether intentional or not, is an issue of sufficient significance to merit specific management attention based on systematic information gathering, reporting, and tracking over time. IRS clearly believes it is not unless it can be shown that the problem is statistically significant relative to the total number of IRS contacts with the public. IRS argues in its comments that (1) our study did not show that abuse, as we defined it, occurred with statistically verifiable frequency; and (2) other IRS information gathering activities give IRS management sufficient information to track these situations. 
In other words, IRS said that we have not shown that there is a significant problem, but if there is, IRS believes it has all the information needed to deal with it. We believe the issue of taxpayer mistreatment deserves attention, not because we found it to occur frequently, but because we could not determine how frequently it occurs, and neither can IRS without modifying its existing management information systems. More fundamentally, we believe the issue inherently deserves attention. Congress has provided IRS with broad powers to carry out demanding and difficult responsibilities, but Congress also continues to be concerned about protecting taxpayers from arbitrary or overzealous IRS employees and from administrative systems that sometimes go awry. It does not seem unreasonable to us that IRS should have information available about such incidents for its own use in working to strengthen preventative measures and to be able to report periodically on the issue. It is true that our study does not present a statistical analysis of the incidence of abuse. That is the point. We say early in our report that IRS does not have the information readily available to estimate the frequency of such incidents. Our concern is not that we found a high—or low—frequency of abuse. Our concern is that the information needed to allow either us or IRS to determine the frequency of such incidents and to assess the effectiveness of IRS’ controls to prevent such incidents over time is not presently available. We agree, and our draft report recognized, that IRS has numerous information gathering efforts that collect a great deal of information related to the mistreatment of taxpayers. These include an attempt to measure taxpayer burden, defined as time, cost, and dissatisfaction, through such means as an annual report to the tax committees and periodic customer surveys. 
We do not agree, however, that these efforts and the management information derived from them, as presently structured, allow IRS to adequately measure and track incidents of taxpayer mistreatment. IRS says, for example, that it has in place definitions and an information system to track and manage cases where IRS employees have violated a law, regulation, or the Office of Government Ethics’ Standards of Ethical Conduct for Employees of the Executive Branch. This system contains information on all cases investigated by IRS’ Internal Security Division, ranging from allegations of violating travel regulations to accepting bribes. While we were able to select some cases out of the system that met our study definition of taxpayer abuse, we found it extremely time consuming and cumbersome because the system is structured to identify employee violations of policies and procedures, rather than to identify cases of abuse or taxpayer mistreatment from the taxpayer’s perspective. In any event, IRS has no definition of taxpayer perception of mistreatment or abuse and the system has no code or category to identify such cases. As a result, although the cases that are entered in this system may involve taxpayer mistreatment, at present no reporting or tracking of such cases can occur. In summary, IRS believes it has adequate information to deal with what it believes are rare instances of taxpayer mistreatment. We do not agree that IRS has adequate information for the reasons noted above. We believe, however, that IRS could readily develop adequate information from its existing management information systems by developing a definition of “taxpayer mistreatment,” or such other term as IRS chooses, and modifying one or more of its present systems to identify incidents with the characteristics called for by the definition. 
Similarly, IRS could develop questions for use in its customer surveys to serve as indicators of the frequency of taxpayer mistreatment and progress in preventing it. We believe IRS should reconsider its decision not to implement this recommendation. IRS disagreed with a recommendation we made in a draft of this report that it revise its Rules of Conduct to deal with situations that can arise when IRS employees have dealings with taxpayers with whom the employees have recently completed an examination, investigation, or collection enforcement action. IRS said that it believed the Office of Government Ethics’ Standards of Ethical Conduct for Employees of the Executive Branch—which superseded IRS’ and other agencies’ Rules of Conduct—are sufficient to address the issues involved. On the basis of our discussions with IRS ethics officials and Office of Government Ethics officials, we agree and have dropped this recommendation and related material from our final report. IRS’ comments on our other recommendations and our recommendation to Congress, along with our evaluation, are briefly summarized below. IRS agreed with our recommendation to provide the capability to minimize unauthorized employee access to taxpayer information in the new computer systems now being developed. IRS summarized several of the security and privacy capabilities these systems are to provide. In response to our recommendation to revise the guidelines for IGPs, IRS said it would issue a memorandum to the field updating a similar memorandum issued on September 21, 1989. IRS said the guidance would, among other things, address the need for (1) establishing criteria for selecting returns to be examined and (2) separating the duties of employees who identify returns to be included in the project from those of employees who select the specific returns to be examined.
While this may serve to temporarily heighten field staff awareness of the importance of this issue, we believe that including such guidance in the Internal Revenue Manual would result in a more permanent emphasis on this issue in light of the potential for greater use of IGPs under Compliance 2000. IRS agreed with our recommendation to reconcile cash receipts more often than once a year and said it would consider doing random and unannounced reconciliations in addition to the annual reconciliations. We believe this is an excellent approach. IRS said that it supported the other part of this recommendation calling for it to emphasize in forms, notices, and publications that taxpayers should, whenever possible, pay their tax bills with checks or money orders instead of cash. In response to our recommendation that IRS better inform taxpayers about their responsibility and potential liability for trust fund recovery penalties, IRS said that it had already done a great deal in this area, including placing warnings on tax deposit coupons, on almost 30 forms, and in publications used by business taxpayers, and does not plan future changes in the coupons because it is moving away from the paper coupons and encouraging electronic payments. IRS did say it would consider using special information packets or taxpayer education materials for small businesses to alert taxpayers to this problem. In response to our recommendation that IRS seek ways to alleviate information-handling problems that frustrate taxpayers, IRS said it continually does this as it gathers data through Quality Review Programs. IRS said that as it moves into TSM’s Document Processing System, the capture of images of returns and other tax documents will improve communications with taxpayers. IRS also said that the Taxpayer Ombudsman’s Problem Resolution Program provides recommendations to the Tax Systems Modernization Program for ways to alleviate systemic problems that cause problems for taxpayers. 
IRS disagreed with our recommendation that it provide guidance for IRS employees on how they should handle White House contacts, other than those involving tax checks of potential appointees or routine administrative matters. IRS said that its current procedures regarding third-party contacts who provide information that could lead to an audit or investigation are adequate to cover any contacts from the White House. Those procedures essentially call for IRS field office personnel to evaluate the information provided and decide if an audit or investigation is warranted. We continue to believe that IRS and taxpayers would be better served by specific, tailored guidance on this topic. Retaining only the current procedures for all third-party contacts will (1) allow IRS employees to accept any information from any White House staffer suggesting that an IRS audit or investigation be done, whether or not the information was received through the senior-level channels prescribed by the White House guidance to its employees, and (2) leave the evaluation of that information, and the decision whether to conduct an audit or investigation, to a relatively low-level IRS employee. IRS supported our recommendation to Congress calling for amending the Internal Revenue Code to allow IRS to inform all of the responsible officers in a business about IRS’ efforts to collect a trust fund recovery penalty from other responsible officers. As agreed with the Subcommittee, we will send copies of this report to other interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Copies will be made available to others upon request. The major contributors to this report are listed in appendix VI. If you have any questions, please call me at (202) 512-5407.
The Subcommittee on Treasury, Postal Service and General Government, House Committee on Appropriations, asked us to determine if IRS has adequate controls and procedures to prevent IRS from abusing taxpayers’ rights. To make this determination, we identified various examples of potential taxpayer abuse that were of concern to the public, Congress, and the media. From these examples, we developed a range of taxpayer abuse issues for which we examined IRS’ procedures, guidelines, and management oversight to determine if these controls appeared adequate to protect taxpayers from abuse by IRS employees, procedures, or systems. At the outset of our review, we found that IRS had no definition of taxpayer abuse. We discussed the topic of taxpayer abuse with managers of various IRS offices, including the Collection, Examination, and Criminal Investigation Divisions; the Inspection Service; and the Problem Resolution Office. Although some managers offered their opinions as to what situations might be considered “abusive,” none was aware of any specific IRS definition of taxpayer abuse. To get other perspectives on the issue, we contacted a number of groups representing both tax practitioners and taxpayers. These groups included the American Bar Association, the American Institute of Certified Public Accountants, the Tax Executive Institute, the Federation of Tax Administrators, and the National Coalition of IRS Whistleblowers. As was the case with IRS managers, the officials from these groups did not have a standard definition of taxpayer abuse. However, they raised a number of concerns, centering not only on what they believed to be specific instances of IRS employees’ excessive use of discretionary enforcement power, but also on IRS’ systemic problems, which they felt caused harm to taxpayers in general and which we believe could be perceived by taxpayers to be abusive.
To assist our data collection efforts regarding taxpayer abuse, we developed a working definition of abuse that encompassed a broad range of situations that were potentially harmful to taxpayers. We defined abuse from the taxpayers’ viewpoint, rather than from IRS’ viewpoint. We then listed various issues related to specific examples of potential abuse that we identified by reviewing recent congressional hearings and reports, newspaper and magazine articles, IRS Problem Resolution Office files, IRS district office and service center congressional correspondence files, and IRS Internal Audit and Internal Security files and reports. Our working definition of taxpayer abuse had three parts that described general categories of potential taxpayer abuse on the part of IRS and its employees. The three categories, as well as related issues of taxpayer abuse, were as follows:

- An IRS employee is alleged to have violated a law, regulation, or the IRS rules of conduct, resulting in possible harm to a taxpayer; a related issue is the use of discretionary enforcement power for personal reasons.
- An IRS employee aggressively uses discretionary enforcement power in such a way that a taxpayer perceives that he or she is harmed, as does the media, Congress, or the general public; related issues include the use of enforcement power against certain persons who, although not directly responsible for a failure to pay a tax liability, may be technically liable for the tax, such as when an innocent spouse is assessed a joint tax liability or when a company employee is assessed a trust fund recovery penalty.
- An IRS computer system fails in such a way that a taxpayer perceives that he or she is abused, as does the media, Congress, or the general public; a related issue is the use of discretionary enforcement power against a taxpayer because the IRS has mistakenly assessed the taxpayer for a debt that the taxpayer does not owe.
Within IRS, in addition to the lack of a service-wide definition of taxpayer abuse, we also learned that IRS does not have specific management information to enable the Service to track and measure abuse. Rather, there are files maintained by various IRS offices that may contain taxpayer complaint information, such as congressional correspondence files maintained at the IRS National Office, district offices, and service centers, and Problem Resolution Office files maintained in IRS’ district offices and service centers. After discussions with IRS officials concerning data sources within the Service that we might use to find examples of potential taxpayer abuse, we decided to review three sources in particular: (1) Problem Resolution Office files maintained at each district office and service center, (2) congressional correspondence files maintained at the National Office and at each district office and service center, and (3) Internal Security investigative case files maintained at the National Office. We judgmentally selected and reviewed 421 fiscal year 1992 Problem Resolution Office files and 201 fiscal year 1992 congressional correspondence files from the field locations shown in table I.1, in which “DO” denotes district office and “SC” denotes service center. In addition, at the National Office we reviewed summaries of all 909 Internal Security investigations closed during fiscal year 1992. From these three sources, we subjectively selected examples of taxpayer complaints that appeared to illustrate various issues within our definition of taxpayer abuse. Initially, we selected 139 examples that we believed indicated potential taxpayer abuse. From those, we further selected 24 that we used as a basis for evaluating IRS’ specific procedures, guidelines, and management oversight to protect against taxpayer abuse.
We did the same for two additional potential examples of taxpayer abuse, one we identified in an IRS Internal Audit report, and a second we included because of extensive media coverage and its sensitivity. Although we did not follow up on each individual example to determine whether these taxpayers were actually abused by IRS, we cited them in our discussions with IRS managers to learn about the range of controls in place to prevent this type of taxpayer abuse. Further, our selection of these examples was intended for illustrative purposes only and did not indicate a frequency of occurrence. In our review, we made no attempt to statistically sample the files that we reviewed because they did not solely represent instances of potential taxpayer abuse. For example, we did not include taxpayer complaints concerning delays in receiving refund checks as an instance of taxpayer abuse. Therefore, we were unable to quantify the extent of potential taxpayer abuse by IRS employees. This was due to both the absence of information on the total universe of situations that may have involved taxpayer abuse and the difficulty of finding specific data concerning instances that could conclusively be defined as taxpayer abuse. As noted above, in our discussions with IRS managers, we used the examples we selected from IRS files to determine whether there were controls in place over IRS operations to prevent taxpayer abuse. Thus, we talked with officials knowledgeable about IRS operations, particularly those of the Collection, Examination, and Criminal Investigation Divisions, to determine the specific processes and procedures currently required in their respective enforcement efforts. In so doing, we attempted to get an understanding of the general controls applicable to these separate operations. The examples we selected, in some instances, enabled us to identify weaknesses in IRS’ current controls and procedures. 
In addition to discussions concerning specific issues and controls, we reviewed documentation related to IRS’ efforts to improve its treatment of taxpayers since we testified on this issue in 1982. We looked at initiatives mandated by Congress, such as the 1988 Taxpayer Bill of Rights, as well as initiatives set forth by IRS in its strategic business plan, such as the Compliance 2000 initiative, in which IRS plans to work closely with taxpayers to aid them in complying with the tax laws. We also reviewed a highly publicized allegation that a taxpayer was abused by IRS because of improper contacts from the White House and FBI. Due to the sensitivity of this allegation, we also looked into IRS’ controls related to contacts by the White House and FBI and determined whether taxpayer abuse actually occurred in this instance. To do this, we discussed the issue of controls with IRS officials and reviewed the related Internal Revenue Manual procedures. We also reviewed a White House Chief of Staff Management Review, an IRS Inspection report and supporting documents, and a Treasury OIG report and supporting workpapers, concerning their respective investigations of the abuse allegations. Finally, we discussed the allegations with officials of the White House, FBI, IRS Inspection Service, Treasury OIG, and representatives of the taxpayer. Because our review overlapped the OIG inquiry, both in terms of the time when the two reviews were being carried out and the issues they addressed, we established a joint working relationship, consistent with the cooperation expected between Inspectors General and GAO under the Inspector General Act of 1978. Through this relationship, we obtained access to the results of and workpapers supporting the OIG’s work, and we provided similar access to pertinent results and workpapers from our work. 
We relied heavily on OIG workpapers and interviews with OIG staff to corroborate information from IRS’ Inspection Service’s report concerning IRS employees’ actions. We did our work from April 1992 through January 1994 at IRS’ National Office; the North Atlantic and Southeast Regions; the Albany, Atlanta, Brooklyn, and Manhattan Districts; and the Atlanta and Brookhaven Service Centers. We also met with White House and FBI officials and with representatives of a taxpayer involved in one of the examples we reviewed. We did our work in accordance with generally accepted government auditing standards. The Acting Commissioner of Internal Revenue provided written comments on a draft of this report, and those comments are reprinted in appendix V. IRS has many operational controls in place to help govern its interactions with taxpayers that should aid in the prevention of taxpayer abuse. In recent years, IRS has also undertaken various initiatives to help improve how it deals with taxpayers. The key elements of IRS’ approach for preventing taxpayer abuse, such as (1) operational controls governing the actions of IRS’ enforcement functions, (2) processes for handling taxpayer complaints, and (3) offices for overseeing IRS’ operations, as well as recent IRS and congressional initiatives to better ensure that taxpayers are treated fairly in their dealings with IRS, are summarized below. IRS has a wide range of operational controls to govern its primary enforcement activities—examination, collection, and criminal investigation. Among these controls are some that IRS considers crucial in its overall efforts to safeguard taxpayers’ rights and prevent abuse. For example, a key control over examination activities is a separation of duties between IRS staff who identify tax returns with potential for a tax change and staff who conduct the actual tax examination. 
A key control over collection activities is a series of tax delinquency notices warning of pending enforcement actions that IRS sends to taxpayers before it actually initiates such actions. For criminal investigations, a key control is the required approval by a management official before IRS criminal investigators initiate such investigations. Specific operational controls and procedures are required when a taxpayer’s return is examined by IRS. Before an examination is done, IRS often has used a computer program to identify returns with potential for tax changes. Some of these computer-identified returns are to be automatically examined, such as those resulting in a refund of $200,000 or more. Others, such as those identified by IRS’ Discriminant Function formula, are to be screened by examination classifiers to further determine those with the greatest potential for tax changes. The returns selected through this screening process would be stored in inventory at the service center until requested by a district office examination manager, who would assign them to either a district office tax examiner or revenue agent to conduct the tax examination. Generally, noncomputer-identified returns, such as referrals from other IRS offices and state tax agencies, would also be (1) further screened by examination classifiers to identify those with the greatest potential for tax changes, (2) stored in inventory until requested by district office examination managers, and (3) assigned to be examined by a district office tax examiner or revenue agent. However, we identified some flaws in the controls for IGPs—a particular type of examination activity involving returns not selected by computer. Controls over IGPs are discussed in our report on page 10. When IRS notifies the taxpayer that his or her return will be examined, the taxpayer is to be provided with IRS Publication 1, “Your Rights as a Taxpayer,” describing the taxpayer’s rights related to the examination process. 
At the start of the examination, IRS examiners are to ask taxpayers if they received Publication 1. IRS Publication 1 informs taxpayers that they have the right to (1) representation, (2) record interviews with IRS personnel, (3) have their personal and financial information kept confidential, (4) receive an explanation of any changes to their taxes, and (5) appeal IRS’ findings through an IRS appeals office or through the court system. The appeals process provides an independent review of IRS examinations and protects against taxpayer abuse by helping to ensure that the taxpayer pays the correct tax. Similar controls and procedures are to be followed when IRS seeks to collect unpaid taxes from taxpayers. For example, IRS is to send taxpayers a series of computer-generated notices before taking any collection enforcement action, thereby enabling taxpayers to voluntarily settle their tax liabilities. IRS also is to send Publication 594, “Understanding the Collection Process,” with its first and last payment delinquency notices. This publication explains taxpayers’ payment alternatives and rights during the collection process, as well as the sequence of enforcement actions that IRS may use if the taxpayers fail to comply. When contacted by IRS collection staff, a taxpayer may seek an installment agreement or submit an offer-in-compromise as alternatives to full payment on demand. If the taxpayer believes that paying the tax would create a hardship, he or she can file an Application for Taxpayer Assistance Order, whereby IRS may agree to allow the taxpayer to defer payment until the taxpayer’s finances improve. If the taxpayer disagrees with the results of IRS’ collection action, he or she may seek an informal administrative review with an IRS manager. Taxpayers who disagree with certain collection actions, such as the assessment of a trust fund recovery penalty, may also pursue a formal appeal through an IRS Regional Director of Appeals or the court system. 
Various controls and procedures are also to be followed by the IRS when a taxpayer is the subject of an IRS criminal investigation. For example, the investigation is to be based on evidence of a possible criminal violation of the Internal Revenue law and it is to be approved by an IRS manager before it is started. At the first meeting between IRS agents and the taxpayer, IRS agents are required to explain the taxpayer’s rights, including the right to representation. If the taxpayer requests representation, the IRS agents are to terminate the meeting. Once the investigation is completed, IRS is required to notify the taxpayer. If IRS plans to recommend prosecution, the taxpayer may seek a conference with an IRS manager to determine the basis for such a recommendation. Prosecution recommendations are to be reviewed and approved by both the IRS District Counsel and the local U.S. Attorney before a case against the taxpayer is presented to a grand jury. Taxpayers have several ways to obtain help if they believe they have been abused by IRS staff. Taxpayers may seek help from supervisors, Problem Resolution Officers (PRO), or the directors of IRS’ local district offices and service centers. They may also complain directly to IRS’ National Office. IRS Publication 1 contains information on filing complaints with supervisors, PROs, and local office directors. Serious complaints involving potential integrity issues are to be referred to IRS’ Internal Security Division for investigation. Complaints of misconduct made against upper-level managers, senior executives, and IRS’ Inspection Service staff are to be referred to the OIG in the Department of the Treasury. IRS has a nationwide Problem Resolution Program, headed by the Taxpayer Ombudsman at the National Office and carried out by PROs in IRS’ 63 district offices and 10 service centers. PROs can help taxpayers who have been unable to resolve their problems after repeated attempts with other IRS staff. 
For example, PROs can help taxpayers who believe (1) their tax accounts are incorrect, (2) a significant item was overlooked, or (3) their rights were violated. PROs can ensure that action is taken when taxpayers’ rights were not protected, correct procedures were not followed, or incorrect decisions were made. PROs can also use authority provided by the Taxpayer Bill of Rights to order that an enforcement action be stopped or other action be taken when a taxpayer faces a significant hardship as a result of an IRS enforcement action. A significant hardship may occur when, as a result of the enforcement action, a taxpayer cannot maintain necessities such as food, clothing, shelter, transportation, or medical treatment. PROs do not resolve technical or legal questions. Such questions, as well as taxpayer complaints of harassment and discourteous treatment by IRS staff, are to be referred to IRS managers. PROs are to refer complaints involving potential employee integrity issues to Internal Security or, if a senior IRS official is involved, to the Treasury OIG. IRS’ Internal Security Division is required to investigate taxpayer complaints involving potential criminal misconduct, such as embezzlement by IRS staff and potential administrative misconduct, such as unauthorized access to a taxpayer’s account. Internal Security is to report its investigative results to IRS management for its use in determining appropriate personnel action. In addition, Internal Security can refer criminal violations to the local U.S. Attorney for prosecution. Internal Security is to refer other allegations of misconduct, such as discourteous treatment of taxpayers, to management officials. When handling these referrals and other less serious taxpayer complaints, supervisors are required to obtain a full explanation from both the taxpayer and employee before deciding how to resolve the problem. 
If they cannot determine how to resolve the problem, supervisors are to refer the unresolved complaints to the PRO. Although IRS’ Internal Audit Division usually neither receives nor investigates taxpayer complaints, it can, in addition to performing its mission of reviewing IRS’ operations, review the results of Internal Security investigations. Both types of reviews could identify potential internal control weaknesses, some of which may point to possible taxpayer abuse. When such weaknesses are identified, Internal Audit can recommend that IRS management strengthen the controls in question. Internal Audit findings are to be disseminated to IRS’ district offices, so that similar potential control problems in other offices can be identified and acted upon. Thus, Internal Audit can serve as an important aid to management oversight. The OIG in the Department of the Treasury is to play an oversight role in protecting taxpayers from abuse. Soon after the OIG was established by Congress, allegations of misconduct by IRS officials led the Commissioner of Internal Revenue to transfer staff and funds to the OIG for investigating allegations involving IRS officials above grade 14 of the General Schedule. The OIG also conducts reviews of IRS’ Internal Security and Internal Audit Divisions, and it has the authority to review any IRS activity the Inspector General believes warrants such attention. In the 1980s, both new laws and new IRS initiatives improved taxpayers’ ability to resolve problems with IRS. This has been particularly noticeable since 1988, when Congress passed the Taxpayer Bill of Rights. We believe this legislation, coupled with various IRS initiatives, such as those involving quality management, ethics and integrity, a collection appeals process, and modernizing its computer systems, has improved the potential for fair and reasonable treatment of taxpayers in their dealings with IRS.
These efforts should also lessen the potential for taxpayer abuse by IRS employees. In 1988, Congress passed the Taxpayer Bill of Rights, which caused IRS to take steps to improve its interaction with taxpayers. The act contained 21 provisions affecting a wide range of issues. For example, it clarified certain basic rights of taxpayers and required IRS to provide taxpayers with a statement of these rights. To fulfill this requirement, IRS developed Publication 1, “Your Rights as a Taxpayer,” which is to be given to all taxpayers who are subject to examination and collection actions. Among other provisions, the act clarifies a taxpayer’s right to representation in dealing with IRS and provides additional methods to resolve disputes over IRS’ interpretation and administration of the tax laws. A key provision of the act authorizes the Taxpayer Ombudsman or any designee of the Ombudsman—who reports only to the Commissioner of Internal Revenue—to issue Taxpayer Assistance Orders to rescind or change enforcement actions that caused or might cause a significant hardship for the taxpayer. Although few of these formal orders have been issued, the authority provided by the act and three key decisions IRS made to implement the act greatly strengthened the ability of the PROs to assist taxpayers. IRS decided to (1) expand the act’s definition of “hardship” to include not only hardships caused by its administration of the tax laws, but all hardships that it could reasonably relieve; (2) provide assistance, when reasonable, to hardship applicants who did not meet IRS’ hardship criteria, but who could be helped, either through the Problem Resolution Program or by another IRS unit; and (3) instruct its employees to initiate hardship applications on behalf of taxpayers when employees encountered situations that might warrant assistance. We discussed IRS’ implementation of this and other provisions of the act in a 1991 report.
Our report confirmed that IRS had assisted taxpayers who applied for hardship whether or not they met the hardship criteria. IRS statistics showed that over 32,000 taxpayers—about 70 percent of all applicants—had received assistance. (See appendix III for a detailed description of the provisions of the act.) In 1985, IRS established a Commissioner’s Quality Council and began developing a service-wide quality improvement initiative designed to identify and satisfy customers’ needs. Since that time, Internal Revenue Commissioners have defined IRS’ objectives in terms of both increasing customer service and reducing taxpayer burden. As a result of the emphasis on meeting customers’ needs, IRS developed customer service training that focuses on improving staff interaction with taxpayers in an effort to attain greater customer satisfaction and confidence. In addition to customer service training, IRS has also recently conducted customer satisfaction surveys, including surveys of those taxpayers who had been subjected to IRS’ examination and collection actions. Overall, these surveys have shown that there were more respondents who believed that IRS had treated them fairly than respondents who believed that IRS had treated them unfairly. For example, in one survey of taxpayers in general, 32 percent of the respondents gave IRS a high rating for fairly applying the tax laws and 17 percent gave IRS a low rating. In another survey of taxpayers who had been audited by IRS, 50 percent gave IRS a high rating for fair treatment and 16 percent gave IRS a low rating. In a survey of taxpayers who had been subjected to IRS collection action, 42 percent of those who responded gave IRS a high rating for fairness and 28 percent gave IRS a low rating. 
As a continuation of its emphasis on treating taxpayers as customers, IRS has embarked on a service-wide initiative called Compliance 2000, in which IRS staff are to use assistance and education to aid taxpayers in complying with the tax laws. A goal of this initiative is to reduce the need for examination and collection actions against those taxpayers who would voluntarily comply with the tax laws if they fully understood how to do so, thus enabling IRS to concentrate its enforcement efforts against those who intentionally fail to comply with the tax laws. If this initiative has the intended effect, more taxpayers may avoid noncompliance with the tax laws, thus reducing their interaction with IRS and the potential for taxpayer abuse. Congressional hearings in 1989 and 1990 questioned IRS’ overall standards of ethics and integrity. To address these concerns, IRS began a long-term effort to enhance its ethics and integrity programs and to improve staff awareness of integrity issues throughout the Service. As part of this effort, IRS published an Ethics Plan that called for IRS to develop and deliver ethics training to all its employees. As of September 30, 1992, 14,000 IRS managers had completed an ethics training course developed for IRS by the Josephson Institute of Ethics. As of the end of Fiscal Year 1993, IRS had provided ethics training to the remainder of its employees. In addition to developing an Ethics Plan, IRS responded to congressional concerns about whether it could adequately and independently investigate ethical misconduct on the part of its senior employees by permanently transferring 21 staff years and $1.9 million to the OIG of the Department of the Treasury. The OIG planned to use these resources to oversee IRS’ Office of Inspection, investigate allegations of misconduct by IRS senior employees, and conduct special reviews of IRS operations. 
Over time, IRS’ emphasis on ethics and integrity should have a positive impact on how IRS employees conduct themselves when dealing with the public. When IRS collects unpaid taxes, it is to distinguish between those taxpayers who show a sincere effort to meet their tax obligations and those who do not. If full payment is not possible, IRS collection officials are required to consider each of the payment options available to taxpayers, and attempt to find the best way for them to voluntarily pay the taxes they owe. If a taxpayer does not make an attempt to pay a tax bill, IRS may take actions to enforce the notice and demand for payment, such as (1) file a notice of federal tax lien, (2) serve a notice of levy, and (3) seize and sell a taxpayer’s property. IRS collection officials can recommend enforcement actions on the basis of contact with the taxpayer and analysis of his or her income, expenses, and assets. They have discretionary power in carrying out these actions, and their decisions often result as much from their judgment as from the payment history of the taxpayer. In reaching their determinations, collection staff are to consider such issues as whether (1) the taxpayer has a history of unreasonably delaying the collection process, (2) the taxpayer is a tax protestor, and (3) collection of the tax is threatened or in jeopardy. If a taxpayer disagrees with a revenue officer’s collection decision, he or she may raise the issue with the revenue officer’s supervisor. Alternatively, the taxpayer may contact the Problem Resolution Office to complain about collection actions. Problem Resolution officials have the authority to overturn collection decisions when issues of hardship arise. Currently, there is no formal appeals procedure for taxpayers who disagree with IRS’ collection actions, with the exception of cases involving the trust fund recovery penalty, rejected offers-in-compromise, and specified penalty issues. 
One provision of the taxpayer rights legislation introduced in Congress in 1992 and again in 1993 called for a pilot program to study the merits of a formal appeal procedure for taxpayers who disagree with collection enforcement actions. IRS established such a pilot program in the Indianapolis District on March 30, 1992, later expanded it, and is currently evaluating its effectiveness. IRS is gathering data on how often taxpayers appealed IRS’ collection actions, how often its decisions were upheld or reversed, the costs of such a program and its benefits to IRS and taxpayers, and the effects such a program would have on the number of IRS’ collection actions. IRS recently expanded the program to other locations and plans to eventually determine the need for a formal collection appeals process. IRS is currently implementing TSM, which is a long-term strategy to modernize IRS’ computer and telecommunications systems. While some phases of TSM are already underway, it is expected to be fully implemented early next century and should greatly enhance IRS’ capability to serve taxpayers and reduce their burden when dealing with IRS. TSM has already benefited some taxpayers. For example, one aspect of TSM—Electronic Filing—allows taxpayers to file their returns more quickly and accurately and also to receive their refunds more quickly. In the future, TSM is expected to eliminate mailing unnecessary computer generated correspondence to taxpayers who have already responded to prior notices. In addition, with proper controls, by making more information readily available to IRS staff, TSM should reduce the time it takes to answer taxpayers’ questions and resolve taxpayers’ problems, both of which could be a source of frustration and may be perceived by some taxpayers to be a form of abuse. Tax Administration: IRS’ Implementation of the Taxpayer Bill of Rights (GAO/T-GGD-92-09, Dec. 10, 1991). 
Tax Administration: IRS’ Implementation of the 1988 Taxpayer Bill of Rights (GAO/GGD-92-23, Dec. 10, 1991). This testimony and report assessed IRS’ implementation of seven key provisions of the 1988 Taxpayer Bill of Rights and stated that while IRS had successfully implemented them in general, there were areas in which IRS could more consistently treat taxpayers, such as notifying them when IRS cancels installment agreements. IRS Policies and Procedures to Safeguard Taxpayer Rights and the Effects of Certain Provisions of the 1976 Tax Reform Act (Testimony - Apr. 26, 1982). This testimony concluded that while there may have been instances in which IRS violated a taxpayer’s rights, we found no evidence to indicate that such instances were widespread or systemic. IRS Information Systems: Weaknesses Increase Risk of Fraud and Impair Reliability of Management Information (GAO/AIMD-93-34, Sept. 22, 1993). This report identified weaknesses in IRS’ general controls over its computer systems which resulted in various problems, such as unauthorized access to taxpayers’ account information by IRS employees. Tax Systems Modernization: Concerns Over Security and Privacy Elements of the Systems Architecture (GAO/IMTEC-92-63, Sept. 21, 1992). This report raised concerns about the need for IRS to clearly delineate responsibility for protecting the privacy of taxpayer information. Tax Administration: New Delinquent Tax Collection Methods for IRS (GAO/GGD-93-67, May 11, 1993). This report highlighted improvements that IRS could make in its lengthy and rigid collection process for delinquent tax debts. Tax Administration: IRS’ Management of Seized Assets (GAO/T-GGD-92-65, Sept. 24, 1992). This testimony stated that IRS has inadequate controls to protect taxpayer property it seizes and that IRS’ practices for disposing of seized property do not always provide the best return for the taxpayer. Tax Administration: Extent and Causes of Erroneous Levies (GAO/GGD-91-9, Dec. 21, 1990). 
This report showed that IRS initiated over 16,000 erroneous levies against taxpayers in Fiscal Year 1986 and recommended that IRS institute a nationwide levy verification program to significantly reduce the number of erroneous levies. Tax Administration: IRS Can Improve the Process for Collecting 100-Percent Penalties (GAO/GGD-89-94, Aug. 21, 1989). This report analyzed IRS’ process for collecting the 100-percent penalty and recommended several actions IRS should take to make the process more efficient and effective. Tax Administration: IRS Should Expand Financial Disclosure Requirements (GAO/GGD-92-117, Aug. 17, 1992). This report recommended that IRS could better detect and prevent employee conflicts of interest by expanding its financial disclosure requirements. Tax Administration: IRS’ Progress on Integrity and Ethics Issues (GAO/T-GGD-92-62, July 22, 1992). Internal Revenue Service: Status of IRS’ Efforts to Deal With Integrity and Ethics Issues (GAO/GGD-92-16, Dec. 31, 1991). This testimony and report dealt with the progress IRS has made in addressing problems we had identified related to ethics and integrity issues and suggested that IRS make better use of its management information system to monitor disciplinary actions against its employees. IRS’ Efforts to Deal With Integrity and Ethics Issues (GAO/T-GGD-91-58, July 24, 1991). Internal Revenue Service: Employee Views on Integrity and Willingness to Report Misconduct (GAO/GGD-91-112FS, July 24, 1991). This testimony and fact sheet outlined IRS’ efforts, in conjunction with the Treasury Inspector General, to deal with concerns about integrity and ethics at IRS. IRS Data on Investigations of Alleged Employee Misconduct (GAO/T-GGD-89-38, July 27, 1989). Tax Administration: IRS’ Data on Its Investigations of Employee Misconduct (GAO/GGD-89-13, Nov. 18, 1988). 
This testimony and report pointed out various weaknesses with IRS’ Internal Security Management Information System related to the outcomes of employee misconduct investigations and also highlighted IRS’ plans to develop a new and improved management information system. Andrew Macyko, Regional Assignment Manager; Robert McKay, Evaluator-in-Charge; Richard Borst, Senior Evaluator; Bryon Gordon, Evaluator. | Pursuant to a congressional request, GAO reviewed whether the Internal Revenue Service (IRS) has adequate controls to prevent taxpayer abuse and whether additional appropriations are needed to strengthen IRS ability to prevent taxpayer mistreatment.
GAO found that: (1) although IRS has undertaken several initiatives to prevent taxpayer abuse, evidence of abuse remains; (2) IRS has implemented a wide range of controls, processes, and oversight offices to govern staff behavior in their contacts with taxpayers; (3) IRS needs to better define taxpayer abuse and develop management information about its frequency and nature so that it can strengthen abuse prevention procedures, and identify and minimize the frequency of future abuses, and Congress can better evaluate IRS performance in protecting taxpayers' rights; (4) IRS needs to strengthen its controls and procedures to reduce unauthorized access to computerized tax information by IRS employees, inappropriate selection of tax returns during information gathering projects, embezzlement of taxpayers' cash payments, questionable trust fund recovery penalties, and information-handling problems that contribute to taxpayer frustration; (5) proposed taxpayer protection legislation would aid IRS in providing taxpayers with information needed to better deal with trust fund recovery penalties; (6) the allegation of potential abuse involving possible improper contacts with IRS by White House staff was unfounded; (7) the White House has provided explicit guidance for its staff regarding IRS contacts and IRS should improve its procedures for handling White House contacts; and (8) although Congress may not need to provide additional appropriations to IRS to prevent taxpayer abuse, additional appropriations may be needed to resolve IRS information-handling problems as part of its Tax Systems Modernization (TSM) program. |
Within the Department of Health and Human Services (HHS), CMS is responsible for overseeing Medicaid at the federal level, while states are responsible for the day-to-day operations of their Medicaid programs. Under section 1115 of the Social Security Act, the Secretary of HHS may waive certain Medicaid requirements to allow states to implement demonstrations through which states can test and evaluate new approaches for delivering Medicaid services that, in the Secretary’s judgment, are likely to assist in promoting Medicaid objectives. Prior to the enactment of PPACA, states that wanted to expand Medicaid coverage to childless adults could do so only under a demonstration. While states may now expand their programs to cover these individuals through a state plan amendment, some states have expanded coverage under demonstrations in order to tailor coverage for this group in a manner that differs from what federal law requires. For example, states are not permitted to exclude coverage of mandatory benefits, such as NEMT, under their state plans, but they may do so by obtaining a waiver of the requirement under a demonstration. Recently, the Secretary of HHS approved various states’ demonstrations to test alternative approaches, such as allowing states to use Medicaid funds to provide newly eligible enrollees with premium assistance to purchase private health plans in their respective state marketplace, or to exclude from coverage certain mandatory Medicaid benefits, such as NEMT, for newly eligible enrollees. CMS has required those states that have obtained approval to exclude the NEMT benefit for one year under a demonstration to submit annual evaluations on the effect of this change on access to care, which will inform the agency’s decision to approve any extension requests. Evaluations by research organizations identified the lack of transportation as a barrier to care that can affect costs and health outcomes. 
For example, a survey of adults in the National Health Interview Survey found that limited transportation disproportionately affected Medicaid enrollees’ access to primary care. A study by the Transportation Research Board found that individuals who miss medical appointments due to transportation issues could potentially exacerbate diseases, thus leading to costly subsequent care, such as emergency room visits and hospitalizations. We also previously reported that there are many federal programs, including Medicaid, that provide the NEMT benefit to the transportation-disadvantaged population. However, in this work we found that coordination of NEMT programs at the federal level is limited, and there is fragmentation, overlap, and the potential for duplication across NEMT programs. As a result, individuals who rely on these programs may encounter fragmented services that are narrowly focused and difficult to navigate, possibly resulting in NEMT service gaps. Among the 30 states that expanded Medicaid as of September 30, 2015, 25 reported that they did not undertake efforts to exclude the NEMT benefit for newly eligible Medicaid enrollees and were not considering doing so. Three states reported pursuing such efforts, and two states did not respond to our inquiry, although CMS indicated that neither of these states undertook efforts to exclude the NEMT benefit. (See fig. 1.) Three states (Indiana, Iowa, and Arizona) reported undertaking efforts to exclude the NEMT benefit under a demonstration as part of a broader health care initiative to expand Medicaid in their respective states. However, only Indiana and Iowa had received approval from HHS for these waivers as of September 30, 2015, while Arizona was still seeking approval. Indiana: Indiana’s effort to exclude the NEMT benefit from coverage pre-dates PPACA and is not specific to newly eligible enrollees under the state’s expansion.
Beginning in February 2015, Indiana expanded Medicaid under a demonstration that provides two levels of coverage for newly eligible enrollees, depending on their income level and payment of premiums. As part of this demonstration, the state received approval to exclude the NEMT benefit for newly eligible enrollees. Indiana’s efforts to implement its Medicaid expansion are based, in part, on another demonstration that the state has had in place since 2008. Under this older demonstration, which provided authority for the state to offer Medicaid coverage to certain uninsured adults, NEMT was not a covered benefit for this population. Iowa: Iowa expanded Medicaid in response to PPACA through two demonstrations beginning in January 2014. Under these demonstrations, the state offers two separate programs for newly eligible enrollees: one that offers Medicaid coverage administered by the state to enrollees with incomes up to 100 percent of the FPL, and a second that offers premium assistance to purchase private coverage through the state’s health insurance marketplace for those enrollees with incomes from 100 to 133 percent of the FPL. For both of these demonstrations, the state received approval to exclude the NEMT benefit. Similar to Indiana, Iowa’s effort to exclude the NEMT benefit for a portion of its Medicaid population pre-dates PPACA. In July 2005, Iowa expanded Medicaid to certain populations under a demonstration with limited benefits that did not include NEMT. Arizona: When Arizona expanded Medicaid in January 2014, it had not sought to exclude the NEMT benefit for newly eligible enrollees. However, when the state submitted a request on September 30, 2015, to extend its longstanding demonstration, it sought approval to exclude the NEMT benefit. 
Arizona’s proposed extension would require newly eligible adults, including those with incomes from 100 to 133 percent of the FPL, to enroll in a new Medicaid program that includes enrollee contributions into an account that can be used for non-covered services and an employment incentive program. The proposed extension, including the request to exclude the NEMT benefit, was under review, as of November 2015. Officials from these three states cited several reasons for their efforts to exclude the NEMT benefit, including a desire to align Medicaid benefits with private insurance plans, which typically do not cover this benefit. Indiana officials indicated that when the state initially developed its demonstration in 2008, they designed benefits for a low-income population that tended to be employed. Thus, under that demonstration they offered benefits that resembled private insurance in an effort to familiarize enrollees with private coverage. This experience largely influenced the state’s decision under its current demonstration to exclude the NEMT benefit for newly eligible enrollees. Iowa officials reported that when the state expanded Medicaid, they wanted Medicaid benefits to look like a private insurance plan—with the hope of limiting disruptions in service as fluctuations in income could result in changes to enrollees’ coverage. While Arizona officials also cited the state’s intent to align Medicaid benefits with private health insurance, they also noted that excluding the NEMT benefit would be one way to contain costs. Of the remaining 25 Medicaid expansion states, 14 offered reasons for why they did not exclude the NEMT benefit for newly eligible enrollees. Officials from 8 states reported they did not pursue such efforts because they considered the NEMT benefit critical to ensuring enrollees’ access to care. 
Officials from an additional 4 states reported that they wanted to align benefits for the newly eligible enrollees with those offered to enrollees covered under the traditional Medicaid state plan. Officials from 2 other states reported that the newly eligible Medicaid enrollees did not significantly increase their program enrollment, and therefore, there was no need to alter this benefit. The two states that excluded the NEMT benefit are in different stages of completing required evaluations of the effect of this exclusion on access to care. Research and advocacy groups indicated that excluding the NEMT benefit could affect enrollees’ access to services and costs of coverage, and could set a precedent for the Medicaid program moving forward. The two states that obtained approval to exclude the NEMT benefit for newly eligible Medicaid enrollees—Indiana and Iowa—are at different stages of evaluating the effect this will have on enrollees and have different time frames for reporting their results. Indiana officials indicated that the state is currently working with CMS on the design of its evaluation and must submit results to CMS by February 29, 2016. According to a draft of the evaluation design, the state plans to survey enrollees and providers to compare the experiences of Medicaid enrollees with and without the NEMT benefit with respect to missed appointments, preventative care, and overall health outcomes; the state also seeks to determine whether enrollees residing in certain parts of the state are more affected by a lack of this benefit. Similarly, Iowa, which excluded the NEMT benefit for all newly eligible enrollees beginning in January 2014, was required to submit a series of independent analyses to CMS and recently received approval to continue its exclusion of this benefit until March 2016. The state conducted an analysis to determine whether newly eligible enrollees’ access to services was affected and reported its results in April 2015. 
Developed in close consultation with CMS, the analysis focused on the comparability of experiences of enrollees covered under the Medicaid state plan (who have the NEMT benefit) with newly eligible Medicaid expansion enrollees (who do not have the NEMT benefit). With such a focus, the analysis sought to determine whether excluding the NEMT benefit presented more of a barrier to obtaining services than an enrollee would have otherwise experienced under the state’s Medicaid state plan. Using enrollee surveys, the analysis found little difference in the barriers to care experienced by the two groups of enrollees as a result of transportation-related issues. For example, the analysis noted that about 20 percent of enrollees in both groups reported usually or always needing help from others to get to a health care appointment. Additionally, the analysis identified comparability between both groups of enrollees in terms of their reported unmet need for transportation to medical appointments (about 12 percent of both groups) and reported worry about the ability to pay for the associated costs (13 percent of both groups). However, looking within the group of newly eligible enrollees without the NEMT benefit, the Iowa evaluation found that those with lower incomes—under 100 percent of the FPL—tended to need more transportation assistance and have more unmet needs than those with higher incomes.
For example, 25 percent of newly eligible enrollees with incomes under 100 percent of the FPL reported needing help with transportation, compared with 11 percent of higher income newly eligible enrollees; 15 percent of newly eligible enrollees with incomes under 100 percent of the FPL reported an unmet need for transportation, compared with 5 percent of higher income newly eligible enrollees; and 14 percent of newly eligible enrollees with incomes under 100 percent of the FPL reported that they worried a lot about paying for transportation, compared with 6 percent of higher income newly eligible enrollees. HHS recently approved Iowa’s amendment to continue its waiver of the NEMT benefit, although it noted concern about the lower income enrollees’ experience. In approving the state’s request, HHS cited the need for Iowa to continue evaluating the effects of the waiver in light of survey results on the type of transportation that newly eligible enrollees reported using to get to health care appointments. These results showed that newly eligible enrollees tended to rely on others, such as family and friends, to reach health care appointments more so than Medicaid state plan enrollees. Researchers who conducted the evaluation of Iowa’s program indicated that they plan to conduct additional analyses, which include some—but not all—of the suggestions we have offered. For example, our review of Iowa’s evaluation methodology suggests that linking survey responses from both groups of enrollees directly to their claims could improve the state’s understanding of enrollees’ patterns of utilization and the implications of transportation difficulties. The researchers indicated that they will link claims data with survey responses in the next evaluation and use regression modeling to determine which group of enrollees was more likely to have an unmet need due to transportation issues.
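The income-stratified gaps reported above can be sanity-checked with a simple two-proportion z-test. The sketch below is illustrative only: the report does not state the survey sample sizes, so the n = 200 per group is a hypothetical assumption; the percentages are those reported for Iowa's newly eligible enrollees.

```python
from math import sqrt
from statistics import NormalDist

def two_prop_z(p1: float, p2: float, n1: int, n2: int):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Shares reported for Iowa's newly eligible enrollees, under-100%-FPL
# group vs. higher-income group. n = 200 per group is a HYPOTHETICAL
# assumption -- the report gives no survey sample sizes.
n = 200
for label, low_fpl, higher in [
    ("needed help with transportation", 0.25, 0.11),
    ("unmet need for transportation", 0.15, 0.05),
    ("worried a lot about paying", 0.14, 0.06),
]:
    z, p = two_prop_z(low_fpl, higher, n, n)
    print(f"{label}: z = {z:.2f}, two-sided p = {p:.4f}")
```

At this assumed sample size all three gaps would be statistically distinguishable at the 5 percent level; with substantially smaller samples, the same percentage gaps could fail to reach significance.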
Additionally, we noted that the small sample size could limit their ability to detect differences between the enrollees. The researchers indicated that their next evaluation will survey a larger sample of Medicaid enrollees covered under the state plan who have the NEMT benefit and newly eligible enrollees who do not in an effort to increase their ability to detect these differences. We agree that increasing the sample size could strengthen confidence in the results. We also noted that the researchers did not consider whether survey respondents lived in a rural or urban area, which can be important because research shows that the need to travel longer distances and the lack of public transportation in rural areas can pose challenges for individuals seeking services. The researchers indicated that they did not stratify the groups by rural or urban areas because of a concern about inadequate sample sizes in certain counties and because the need for transportation in Iowa is not unique to the pursuit of health care services, but also poses a challenge in other aspects of residents’ lives. While stratifying survey results by rural and urban areas could be relevant in evaluating enrollees’ access to care or unmet need, the researchers do not plan to include a rural and urban stratification in the next evaluation. CMS officials recognized the value of a rural and urban distinction, but indicated that there is a need to balance further analysis with the ability to generate results expeditiously and facilitate decision making on the waiver. Officials from the 10 research and advocacy groups we interviewed—which represent Medicaid enrollees, underserved populations, and health care providers—noted potential concerns about excluding the NEMT benefit as it relates to enrollee access to services and costs of coverage.
Access to Services: Officials from 9 of the 10 groups we interviewed indicated that excluding the NEMT benefit would impede newly eligible enrollees’ ability to access health care services, particularly individuals living in rural or underserved areas, as well as those with chronic health conditions. For example, officials from one national group that represents underserved populations indicated that the enrollees affected by the lack of the NEMT benefit will be those living in rural areas who must travel long distances for medical services. Another group that represents providers in Iowa also cited the difficulty faced by enrollees who live in rural areas, noting that some of the patients they served have had to cancel their medical appointments because the patients do not have a car, money to pay for gas, or access to public transportation. With respect to enrollees with chronic health conditions, one group that represents transportation providers (and others who support mobility efforts) noted that transportation can be a major barrier for individuals who are chronically ill and need recurring access to life-saving health services. Similarly, another group that represents community health centers specified that those with mental health conditions are particularly vulnerable due to a lack of transportation.

Costs of Coverage: Officials from 5 of the 10 groups we interviewed also noted that efforts to exclude the NEMT benefit can have implications for the costs of care because patients without access to transportation may forgo preventive care or health services and end up needing more expensive care, such as ambulance services or emergency room visits. For example, a national group that represents providers of services for low-income populations noted that for people who are receiving regular substance abuse treatments, missing appointments can make them vulnerable to relapsing, which ultimately drives up the cost of their care.
Another national group that represents underserved populations indicated that they have seen low-income individuals who do not have a car and cannot afford public transportation use higher-cost care from emergency rooms for their medical problems because they cannot otherwise access care. One other group that represents providers noted that by driving up the cost of care, a lack of transportation will ultimately trickle down to lower reimbursement rates for providers. Despite these potential implications, officials from 9 of the 10 groups we interviewed acknowledged various advantages of a state expanding Medicaid even with a more limited benefit. For example, officials from 4 groups remarked that some coverage is better than no coverage in light of the significant health care needs among low-income populations. These groups recognized the political challenges that have driven state decisions whether to expand Medicaid and the concessions that are needed for an expansion to occur. For example, officials from a group that represents providers in Iowa indicated that although introducing variations in Medicaid programs adds complexity for providers, patients, and the state, flexibility is important in helping a state find a coverage solution that works in its political climate. Similarly, an advocacy group from one state acknowledged that an expansion with full traditional Medicaid benefits was never going to be achieved in that state, given the political environment. As such, groups within that state’s provider community broadly supported the state’s effort to expand Medicaid—even without the NEMT benefit—because so much of the population was uninsured and needed this coverage. However, while acknowledging their preference for states to expand Medicaid, three groups we spoke with maintained their concerns about the effects of such efforts on enrollees’ access to care. 
Officials from two of these groups said that improvements in the number of people covered should not be achieved by eroding essential services, while an official from the other group questioned the value of having health coverage if an enrollee is unable to get to the location where services are provided. Further, officials from six of the groups we interviewed were concerned that HHS’s approvals of state efforts to exclude the NEMT benefit potentially provide other states with an incentive to pursue similar efforts. These six groups raised concerns that every time HHS approves such an effort, a new baseline is created for what states may request in an effort to exclude core Medicaid services. We provided a draft of this report to HHS for comment. The department provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Carolyn L. Yocom at (202) 512-7114 or [email protected], or Mark L. Goldstein at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the individuals named above, other key contributors to this report were Susan Anthony, Assistant Director; Julie Flowers; Sandra George; Drew Long; Peter Mann-King; JoAnn Martinez-Shriver; Bonnie Pignatiello Leer; Laurie F. Thurber; and Eric Wedum. 
| Medicaid, a federal-state health financing program for certain low-income individuals, offers NEMT benefits to individuals who are unable to provide their own transportation to medical appointments. This benefit can be an important safety net for program enrollees as research has identified the lack of transportation as affecting Medicaid enrollees' access to services. Under PPACA, states can opt to expand eligibility for Medicaid to certain adults. However, some states have excluded the NEMT benefit for these newly eligible enrollees by obtaining a waiver of the requirement under the authority of a Medicaid demonstration project. GAO was asked to explore state efforts to exclude the NEMT benefit for newly eligible Medicaid enrollees, and the potential implications of such efforts. This report examines (1) the extent to which states have excluded this benefit for newly eligible enrollees, and (2) the potential implications of such efforts on enrollees' access to services. GAO contacted the 30 states that expanded Medicaid under PPACA as of September 30, 2015; reviewed relevant documents and interviewed officials in the 3 states that have taken efforts to exclude the NEMT benefit; reviewed prior research on transportation for disadvantaged populations; and interviewed officials from CMS, the federal agency that oversees Medicaid, and 10 research and advocacy groups based on referrals from subject-matter experts and knowledge of the NEMT benefit. HHS provided technical comments on a draft of this report, which GAO incorporated as appropriate. States' efforts to exclude nonemergency medical transportation (NEMT) benefits from enrollees who are newly eligible for Medicaid under the Patient Protection and Affordable Care Act (PPACA) are not widespread. 
Of the 30 states that expanded Medicaid as of September 30, 2015, 25 reported that they did not undertake efforts to exclude the NEMT benefit for newly eligible enrollees, 3 states reported pursuing such efforts, and 2 states—New Jersey and Ohio—did not respond to GAO's inquiry. However, the Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), indicated that neither New Jersey nor Ohio undertook efforts to exclude the NEMT benefit. Two of the three states pursuing efforts to exclude the NEMT benefit—Indiana and Iowa—have received waivers from CMS to exclude the benefit, and are in different stages of evaluating the effect these waivers have on enrollees' access to care. Indiana's draft evaluation design describes plans to survey enrollee and provider experiences to assess any effect from excluding the NEMT benefit. Iowa's evaluation largely found comparable access between enrollees with and without the NEMT benefit; however, it also found that newly eligible enrollees beneath the federal poverty level tended to need more transportation assistance or have more unmet needs than those with higher incomes. Officials from the groups that GAO interviewed identified potential implications of excluding the NEMT benefit, such as a decrease in enrollee access to services and an increase in the costs of coverage. For example, nearly all of the groups indicated that excluding the NEMT benefit would impede access to services, particularly for those living in rural areas, as well as those with chronic health conditions. |
USO is a congressionally chartered, nonprofit, nongovernmental, and charitable corporation whose mission is to enhance the quality of life for U.S. armed forces personnel and their families. The USO World Headquarters acts as the enabling body for the organization, sets overall policy and strategy, is responsible for the operation of overseas USO centers, and produces overseas celebrity entertainment tours in partnership with AFEO. From World War II through the Vietnam War, USO and DOD partnered to enhance troop morale and provide entertainment to military outposts worldwide. Following the Vietnam War, legislation establishing USO’s federal charter and various DOD directives and instructions formalized this close association and made DOD resources, including funds, available to the maximum extent possible to support USO’s mission. DOD uses both appropriated and nonappropriated resources to support USO’s operations. Appropriated support is derived from DOD’s O&M funds, and nonappropriated support is provided largely through DOD-donated goods, services, and infrastructure. DOD regulations designate (1) the Under Secretary of Defense for Personnel and Readiness as the official liaison between DOD and USO and (2) AFEO, a joint-service operation, as the DOD liaison office for USO. AFEO, established in 1951, administers DOD’s Armed Forces Entertainment Program in partnership with USO. The U.S. Air Force is the executive agent for AFEO, having assumed that role from the U.S. Army in 1997. AFEO’s mission is to provide free, high quality, live entertainment to U.S. military personnel and their families stationed overseas. AFEO supplies all noncelebrity entertainment, and USO is the primary provider of celebrity entertainment. Noncelebrity entertainment is made up of up-and-coming performers professionally managed by an agent; celebrity entertainment consists of well-known entertainers, listed in Billboard or with gold or platinum recordings.
Under a contractual arrangement with AFEO, USO recruits celebrity performers for the Armed Forces Entertainment Program. AFEO reimburses USO for certain tour-related expenses such as honoraria, production support, and other direct costs. In some cases, AFEO and other DOD entities also make arrangements to support USO overseas tours and pay directly for these expenses, such as for commercial airfares, visas, passports, and military airlift services, from their respective O&M accounts. Also, USO has agreed to pay for certain tour-related costs, for example, paying the difference between the cost of business-class and first-class air travel and the travel costs for individuals accompanying performers whose costs are not covered under the contract with AFEO. Following the 1991 Gulf War, USO faced serious financial problems because of declining contributions and therefore became concerned about its continued ability to serve the military. To address these concerns, USO’s Board of Governors established the Spirit of Hope Endowment Fund in 1998. According to a former USO official, the intent of the fund was to infuse USO with funds to provide for the perpetuity of its programs and services. To assist USO, the Congress, beginning in fiscal year 2000, provided a total of $23.8 million in O&M funds in the form of grants for USO. As of September 2003, DOD had provided about $20.8 million to USO. USO used these funds as seed money for the endowment. During fiscal years 2000 through 2002, DOD provided substantial appropriated and nonappropriated support, but the total amount cannot be determined because of limitations in DOD’s and USO’s record-keeping systems. For this 3-year period, we identified at least $34.7 million in appropriated funds that DOD provided to support USO activities in the form of grants, contract reimbursements, and direct payments. DOD also provided other appropriated support such as lodging, transportation, and use of some facilities. 
However, we could not identify the total monetary value of DOD’s support derived from appropriated funds because neither DOD nor USO has record-keeping systems to aggregate or report the needed information. While DOD also provides nonappropriated support, largely in the form of in-kind goods (e.g., food and refreshments), services (e.g., Internet and telephone access), and infrastructure support (some performance facilities), to help sustain USO’s overseas operations, the same limitations precluded us from determining the total monetary value for this support. During fiscal years 2000 through 2002, USO received appropriated and nonappropriated support from a variety of DOD sources. As figure 1 shows, this appropriated money flowed to USO in the form of grants awarded by the Office of the Secretary of Defense (OSD) and from contract reimbursements and direct payments provided by AFEO and other DOD components. Nonappropriated support was provided largely through in-kind contributions that included goods (e.g., food and refreshments), services (e.g., Internet and telephone access), and infrastructure support (some performance facilities), contributed by various DOD components. We identified at least $34.7 million in appropriated funds that DOD provided to support USO’s activities during fiscal years 2000 through 2002. As table 1 shows, this funding included grants and contract reimbursements to USO and direct payments by DOD. We also found that DOD components often provide in-kind support, derived from appropriated funds, to USO for its overseas tours such as transportation, free lodging, and some office and performance facilities. During fiscal years 2000 through 2003, the Congress authorized DOD to provide a total of $23.8 million in grants to support USO’s activities. 
As of September 2003, in fiscal years 2000 through 2002, DOD had provided a total of $20.8 million in grants to USO as seed money to fund the Spirit of Hope Endowment Fund, which is intended to ensure the continued existence of USO’s programs and services. The Congress provided the funds through DOD’s O&M appropriation in four annual defense appropriations acts. The funds, appropriated only for grants to USO, were first allocated to the Deputy Assistant Secretary of Defense for Personnel Support, Families and Education. In 1998, USO established the Spirit of Hope Endowment Fund and, after receiving the grants from DOD, transferred the funds into the endowment fund. According to USO policy, the USO Board of Governors established the Spirit of Hope Endowment Fund, which is a restricted account. Money placed into the fund is to be considered as principal and must remain in the account. USO can use the income (e.g. interest and dividends) that accrues on the balance held in the endowment fund to support its operations. USO used about $333,000 in investment income in calendar years 1999 and 2000 for its operations. AFEO provided USO with about $12.1 million in contract reimbursements during fiscal years 2000 through 2002. In September 1999, AFEO awarded an $8.7 million sole source, indefinite delivery, indefinite quantity contract to USO. The purpose of this contract was to provide celebrity entertainment for U.S. armed forces at military installations overseas. The contract performance period was for 3 years (October 1, 1999, to September 30, 2002) with five 1-year option periods (October 1, 2002, to September 30, 2007). According to AFEO and Air Force contracting officials, AFEO spent the entire $8.7 million before the end of the first 3-year period, and it is currently amending the contract to increase the amount of funding. In addition to the $8.7 million contract, AFEO negotiated separate purchase orders for costs associated with specific USO tours. 
The terms of the $8.7 million contract applied to each of these separately negotiated purchase orders. Specifically, the contract provided reimbursements to USO for administrative support services—accounting and administrative services needed to plan and execute overseas tours, including compiling and submitting voucher packages to AFEO for expense reimbursements; celebrity honoraria—payments to celebrity entertainers or groups and their production and/or tour managers to help defray day-to-day expenses; and other direct costs—tour production and equipment rental costs; travel costs to include commercial airfare, car rental or bus fares; lodging and per diem if authorized by DOD’s Joint Travel Regulations; miscellaneous expenses such as shipping, visas, and equipment repair or replacement for celebrity tours; and a 19 percent management fee, calculated using the total of other direct costs expended for noncelebrity tours. AFEO and the Air Mobility Command used appropriated O&M funds to pay directly for USO tour-related expenses, such as commercial airfares, visas and passports, and military airlift services. As table 2 shows, during fiscal year 2002 alone, we identified direct payments that totaled at least about $1.8 million. However, because of record-keeping limitations, AFEO officials could not assure that these amounts represented all direct payments. AFEO used its centrally billed account to pay about $783,000 for its personnel travel expenses and commercial airfares for USO personnel and tour entertainers; its purchase card account to pay around $2,500 for visas, passports, and shipping expenses for entertainment equipment; and its appropriated funds cite to make direct payments totaling about $602,200 for its personnel travel expenses and airlift services provided by the U.S. Air Force, Air Mobility Command. We also identified about $412,000 that the Air Mobility Command paid directly for airlift services for one USO tour. 
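The dollar figures above can be cross-checked with simple arithmetic: the fiscal year 2002 direct payments GAO identified sum to the roughly $1.8 million cited, and combined with the $20.8 million in grants and $12.1 million in contract reimbursements they account for the at least $34.7 million in appropriated support reported for fiscal years 2000 through 2002. A quick sketch, using the amounts as reported:

```python
# Direct payments identified for fiscal year 2002 (as reported by GAO).
direct_payments_fy2002 = {
    "centrally billed account (personnel travel, airfares)": 783_000,
    "purchase card (visas, passports, shipping)": 2_500,
    "appropriated funds cite (travel, airlift services)": 602_200,
    "Air Mobility Command airlift (one USO tour)": 412_000,
}
direct_total = sum(direct_payments_fy2002.values())

# Identified appropriated support, fiscal years 2000-2002 (as reported).
appropriated_support = {
    "grants (seed money for the Spirit of Hope Endowment Fund)": 20_800_000,
    "AFEO contract reimbursements": 12_100_000,
    "direct payments identified (FY2002)": direct_total,
}
overall_total = sum(appropriated_support.values())

print(f"Direct payments identified: ${direct_total:,}")
print(f"Appropriated support identified: ${overall_total / 1e6:.1f} million")
```

These are lower bounds, consistent with the report's caveat that record-keeping limitations prevented identifying all direct payments.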
According to AFEO and Air Mobility Command officials, the command’s airlift services included the movement of passengers and baggage either on regularly scheduled flights or on special assignment airlift missions from designated U.S. stateside military locations to overseas military locations. These special assignment airlift missions involve chartering a military aircraft for a specific purpose. DOD components provide nonappropriated support largely in the form of in-kind goods, services, and infrastructure, such as food and refreshments, Internet and telephone access, and free office space, lodging, and some performance facilities, to help sustain USO’s overseas tours. We could not determine the total amount of appropriated and nonappropriated support to USO’s activities because of limitations in DOD’s and USO’s record-keeping systems. Specifically, we were unable to identify the total value of appropriated support for the fiscal year 2000 through 2002 period because DOD’s records were incomplete. For example, AFEO could not readily provide an accurate accounting of contract reimbursements or direct payments for charges to its centrally billed and purchase card accounts, primarily because it did not track and identify which transactions were for USO celebrity tours and which transactions were for noncelebrity tours that did not involve USO. (Most federal funds that are provided to support USO’s activities are provided for celebrity tours. The cost of noncelebrity tours is paid by AFEO.) Our audit of AFEO’s purchase card transactions confirmed that one could not distinguish between USO and non-USO activities. Without such detail, AFEO could not provide complete reports on funding for USO’s activities. During our audit, AFEO provided us with total amounts for contract reimbursements and some direct payments for fiscal year 2002, but it could not ensure that the totals included all appropriated funds provided in support of USO’s overseas tours. 
Moreover, AFEO could not provide the same information for fiscal years 2000 and 2001 because the records for those years were less complete, and the time and resources required to gather and verify the information were more than AFEO could expend given the unit’s workload. Additionally, AFEO could not provide data on how much in appropriated funds was spent for military airlift services to support USO’s overseas tours because neither AFEO nor the Air Mobility Command has record-keeping systems to aggregate or report the needed information. For example, the command’s records can track and report all airlift services charged to AFEO, but those records do not indicate whether the services were provided to support USO’s tours, nor do they differentiate between celebrity and noncelebrity tours. Furthermore, neither AFEO nor the Air Mobility Command maintains records of the cost of airlift services that other U.S. military units (such as the Army and the Navy) provided in support of USO’s tours. We also could not identify the monetary value for other support derived from appropriated funds, such as transportation, free lodging, and some office and performance facilities provided by military units other than the Air Mobility Command. We could not identify the value of this support because neither DOD nor its components have record-keeping systems to aggregate or report the needed information. Finally, we could not identify the value of DOD’s nonappropriated support to USO, provided largely through in-kind contributions that included goods (e.g., food and refreshments), services (e.g., Internet and telephone access), and infrastructure support (some performance facilities), again because neither DOD nor its components have record-keeping systems to aggregate or report the needed information. Furthermore, USO’s records for in-kind contributions do not clearly identify all private sector and DOD contributions.
DOD and USO did not have sufficient financial and management controls in place to provide reasonable assurance that all appropriated funds were used appropriately. DOD properly awarded grant funds to USO, and USO appropriately administered these funds. However, USO did not require its independent auditor to fully test internal controls over grant funds or funds reimbursed by DOD, as required under grant and contractual agreements with DOD. For support provided through contract reimbursements and direct payments, AFEO lacked clearly written supplemental guidance regarding allowable expenses, effective management oversight in reviewing USO invoices, and adequate procedures for capturing reimbursable expenses. In some cases, these weaknesses resulted in inappropriate expenditures of funds. Specifically, we found problems with expenditures totaling about $433,000, including approximately $86,000 in improper expenditures, $3,000 in questionable expenditures, and $344,000 for unsupported expenditures. As a result of our audit, AFEO officials told us they have initiated several actions to improve financial and management controls and to recover funds from USO. During fiscal years 2000 through 2002, DOD awarded about $20.8 million in congressionally appropriated grants to USO. DOD properly transferred these funds. Specifically, before transferring funds, it entered into grant agreements with USO that included conditions for the use of these funds. For example, these agreements allowed USO to deposit the funds in the Spirit of Hope Endowment Fund or use any investment income earned from the funds for operational expenses. The agreements also set forth administrative and accounting requirements, to include compliance with the Office of Management and Budget (OMB) Circular A-133, Audits of States, Local Governments, and Non-Profit Organizations, as revised June 1997, which implements the Single Audit Act, as amended. 
The Single Audit Act is intended to promote sound financial management, including effective internal controls over federal funds. The single audit is an important tool utilized by federal agencies—including DOD—to monitor federal awards to nonprofit organizations and ensure that the federal funds are properly used. OMB Circular A-133 §_.500 requires an audit of the financial statement(s) for the program receiving federal funds in accordance with generally accepted government audit standards. The audit should be an organizationwide audit that focuses on the recipient’s internal controls and compliance with laws and regulations governing federal awards and be designed to test the program’s internal controls in a manner sufficient to illustrate that a low level of risk exists for the program. Furthermore, OMB Circular A-133, subpart B, §_.200, requires nonfederal entities expending $300,000 or more a year in federal awards to have a single or program-specific audit conducted for that year in accordance with the provisions of the circular. Specifically, §_.205 states that the determination of when an award is expended should be based on when the activity related to the award occurs. Generally, the activity pertains to the expenditure or expense transactions associated with grants. Specifically, the cumulative balance of federal awards for endowment funds, which are federally restricted, is considered expended in each year in which the funds are restricted. Consistent with the grant agreements, USO deposited the entire $20.8 million in grant funds in investment accounts designated specifically for the Spirit of Hope Endowment Fund, and used investment income earned on these funds for operational expenses. With respect to these deposits, USO invested the funds in income-producing assets such as stocks, bonds, and U.S. Treasury bills.
USO used about $333,000 drawn from investment income for operational expenses, and the entire amount of deposited grant funds remained invested. However, USO did not fully comply with the agreements’ audit requirements in identifying the scope of work to be performed by its independent auditor in performing annual audits. While USO arranges for its independent auditor to perform an annual audit, this audit focuses on verifying the sources and accuracy of amounts included in USO’s financial statements and does not comprehensively test internal controls on the receipt and use of grant funds or document tests performed as required by OMB Circular A-133. USO officials initially believed there was no need for an audit that complied with the Single Audit Act, since it spent only investment income from the grant funds and none of the actual grant funds. Based on our review, USO officials now agree that the act applies and that the annual audit should be performed in accordance with the act’s requirements and OMB Circular A-133. For contract reimbursements and direct payments, we found significant problems with DOD and USO controls over these funds. For example, AFEO lacked clearly written supplemental guidance regarding allowable expenses, effective management oversight in reviewing USO’s invoices, and adequate procedures for capturing reimbursable expenses. Also, similar to the grant funds, USO did not fully comply with audit requirements contained in its contracts with DOD. At the time of our audit, the guidance in effect concerning the expenses AFEO will pay in support of USO’s overseas tours was not sufficiently detailed to provide clear, consistent instructions to be followed by AFEO or USO. This guidance included the contract agreement between AFEO and USO, general rules regarding AFEO’s direct payment accounts, federal acquisition and travel regulations, and DOD Instruction 1330.13. 
AFEO refers to the aforementioned guidance in paying for USO overseas tour expenses through contract reimbursements and direct charges to its centrally billed and purchase card accounts. However, as described below, we found several weaknesses in the guidance. Contract reimbursements. The contract between AFEO and USO identifies the general categories of tour-related expenses for which USO can be reimbursed to include administrative support services; honoraria; and other direct costs such as production support/equipment rental, travel, lodging, and miscellaneous expenses. The contract contains numerous clauses and statements that indicate reimbursements will be made in accordance with the Joint Travel Regulations and the Federal Acquisition Regulation. However, the contract is not specific concerning the types of costs—such as the type of production support and other incidental direct costs—and the supporting documentation needed to ensure that AFEO only pays for costs that are allowable and proper. AFEO officials stated that they follow additional policies related to the allowable contract reimbursements for tour-related expenses, such as “thank you” dinners, but these policies are not documented in writing. Centrally billed account. AFEO stated that it uses the account primarily to pay for commercial airfares for USO personnel and entertainers covered under invitational travel orders. Federal travel regulations strictly limit the circumstances under which first-class and business-class travel can be authorized. However, according to AFEO and USO officials, neither has more detailed, written, program-specific guidance to determine when and how USO will pay for first- or business-class travel. Other direct charges.
AFEO provides additional support to USO by directly charging the cost of travel-related expenses, such as visas and passports, to its purchase card account, and by allowing its O&M funds account cite to be charged for Air Mobility Command airlift services. However, AFEO has no specific program guidance regarding how USO should be billed for unauthorized travelers on Air Mobility Command flights. Furthermore, DOD Instruction 1330.13, last updated September 8, 1985, establishes policy and assigns responsibility for carrying out the Armed Forces Professional Entertainment Program for entertaining troops overseas. This instruction states that the Secretary of the Army has responsibility for administering the program; however, the Air Force assumed responsibility in fiscal year 1997. An AFEO official acknowledged that this instruction is out of date. Also, this policy lacks clear statements regarding expenses that should be paid by AFEO and USO, respectively. The lack of sufficient management oversight of funds provided to USO was also a key internal control problem. For example, AFEO officials generally did not closely review or question expenses USO submitted for reimbursement. Additionally, AFEO’s review and reconciliation process for its centrally billed account and billings from the Air Mobility Command was not sufficient to identify airlift expenses that should be charged to USO. Furthermore, during our audit of contract files at the Air Force contracting office responsible for administering the contracts between AFEO and USO, we found no evidence of contract reviews. An Air Force contracting official stated its office sometimes questioned the need for some expenses for celebrity tours when modifications to the contracts were requested. At these times, the expenses were questioned because the supporting documentation provided to the contracting office by AFEO was not always adequate. 
However, according to the Air Force contracting officer currently responsible for the contracts, the existing workload and higher priorities require her to perform more detailed oversight of high-dollar defense contracts. Because celebrity tour costs generally ranged from $10,000 to $300,000, they were given lower priority for contract oversight. Furthermore, we found that USO did not perform the type of audit required under the terms of its contracts with AFEO. Similar to the grant agreements, the contracts contain a requirement for a single audit that would focus on USO’s internal controls as they relate to the federal funds provided through contracts to USO to support the Armed Forces Entertainment Program. USO signed the contracts with AFEO to provide celebrity entertainment for U.S. armed forces at military installations overseas, on a fixed-price and cost-reimbursable basis. In signing these contractual agreements, USO agreed to comply with all contractual requirements. These contractual agreements set forth accounting requirements to be met in accordance with Federal Acquisition Regulation 52.215-2, Alternate II, which requires compliance with OMB Circular A-133. As previously discussed, this circular implements the Single Audit Act, as amended, and is intended to promote sound financial management, including effective internal controls over federal funds. Our review of USO’s audited financial statements, discussions with the independent auditor responsible for performing the audit, and discussions with USO officials indicated that the single audit requirement set forth in the contractual agreements was not met. As discussed previously, USO arranges for an annual audit of its financial statements, but this audit does not include comprehensive testing of internal controls and the documentation of tests performed that is required by OMB Circular A-133.
USO officials initially believed there was no need for an audit that complied with the Single Audit Act, since USO is merely a vendor providing services for AFEO, but now, based on our audit, it agrees that such an audit is required. In the absence of strong internal controls, we found numerous instances where AFEO paid for improper, questionable, and unsupported expenses in support of USO’s overseas celebrity tours. Based on our limited testing of six celebrity tour files, our analysis of AFEO’s centrally billed account, and our examination of Air Mobility Command records, we identified a total of about $433,000 in problem expenditures during fiscal years 2000 to 2002, including improper and questionable expenses totaling $89,021 and unsupported expenses totaling approximately $344,000. We defined an expense as improper when an item was not authorized or properly justified in accordance with the contracts between AFEO and USO, the Joint Travel Regulations and the Joint Federal Travel Regulations issued by DOD, and the Federal Travel Regulation issued by the General Services Administration. For example, we found improper reimbursements for expenses such as alcoholic beverages, meals, lodging, and duplicate billings for administrative services. AFEO also inappropriately paid for first-class and business-class travel and some military airlift services. We identified numerous examples of questionable payments of USO tour costs by AFEO for items such as limousine services, hotels, and airport VIP lounge services. We defined a questionable payment as any item that was reimbursed without documentation showing that the item was necessary for official government business under the Armed Forces Entertainment Program. We also identified numerous unsupported payments. We defined an unsupported payment as any item that was reimbursed without documentation detailing the nature of the expense and the way the price for the expense was determined.
We found payments for improper expenses such as unallowable alcoholic beverages, meals, and lodging; an honorarium and production support for an entertainer who did not participate in a tour for which expenses were reimbursed; and a duplicate billing for administrative services. Moreover, AFEO inappropriately paid for first-class and business-class travel and some military airlift services. AFEO acknowledged that these expenses should not have been reimbursed or paid. For example, AFEO explained that meal expenses for celebrities receiving an honorarium are not reimbursable because the honorarium is intended to help defray the cost of meals and other essentials, and the invitational travel orders we reviewed specifically stated that meal expenses were not authorized. Expenses for alcoholic beverages are never allowable in conjunction with government travel. The cost for first-class travel, and the cost for unauthorized travelers on Air Mobility Command airlifts, should have been borne by USO. Table 3 highlights the improper payments we identified. Improper expenses of particular note are explained in more detail below: Duplicate billing for administrative services. In calendar year 2002, AFEO paid USO twice for administrative expenses associated with overseas tours. We identified improper payments totaling about $9,000. A USO contract employee, responsible for preparing the expense reports for overseas tours, included invoices for these services in several of the tour files we audited. According to the contract employee, USO officials directed that the invoices be submitted to AFEO for payment. The Air Force contracting officials responsible for managing the contract stated that in accordance with the terms of the contract between AFEO and USO, USO is paid a monthly administrative fee that covers numerous administrative tasks, including preparing the expense reports for USO tours.
Contracting officials stated that the monthly administrative fee included the cost for all accounting services, including those performed by the contractor. Neither AFEO nor USO could provide an estimate of how long the double billings occurred. However, one USO official believed that the contract employee started to submit the invoices with the inception of the contract in 1999 and ended with the termination of the contractor’s services in May 2003. Based on our review of documentation provided by USO for calendar years 2001 and 2002, the amount billed could have totaled $78,000. We found no indication that the individual was paid twice for the services performed. Improper payments for first-class and business-class travel. Our analysis of AFEO’s centrally billed account for fiscal year 2002 and selected tour files revealed numerous instances of improper payments by DOD for first-class and business-class travel totaling about $66,000. These first-class and business-class airline tickets were considered improper because they were not authorized and/or properly justified in accordance with the Joint Travel Regulations and the Joint Federal Travel Regulations issued by DOD and the Federal Travel Regulation issued by the General Services Administration (GSA). AFEO’s policy, while not written, is to authorize up to business-class travel for overseas flights for USO celebrity tours. According to an AFEO official, AFEO’s policy is to not authorize first-class travel, and the Director of Services, Air Force Office of Installations & Logistics, the office to which AFEO reports, is required to approve business-class travel. If first-class travel is requested, USO is supposed to pay for the cost of the upgrade from business-class to first-class. However, contrary to the stated policy and statements made by AFEO officials, this was not always the case. In each case, we found AFEO purchased and paid for either the unauthorized first-class or business-class ticket. 
We found no instances in which AFEO requested reimbursement from USO for the cost difference between business-class and first-class airline tickets. Further, neither AFEO nor USO could provide any documentation that indicated that USO paid the additional cost of first-class travel at the time the tickets were purchased. USO officials stated that they were unaware that first-class airline tickets were charged to AFEO’s centrally billed account for USO tours. USO officials stated they would have reimbursed AFEO for the cost of the upgrade from business-class to first-class if AFEO had notified them or if they were provided documentation of the first-class charges. AFEO officials acknowledged that closer scrutiny of the documentation received from USO should have identified those instances in which first-class and business-class airline tickets were improperly paid by AFEO. Additionally, AFEO noted that the monthly reconciliation of the centrally billed account statement to the individual airline ticket transactions should have identified the discrepancies we found. Our review of the monthly reconciliations showed that first-class travel was clearly identified, but AFEO failed to seek reimbursement from USO. A more in-depth discussion of our analysis of the improper first-class and business-class travel we identified is detailed in appendix II. Improper payments for Air Mobility Command Airlift Services. Our analysis of AFEO-issued invitational travel orders and Air Mobility Command billing data for airlift services showed that AFEO paid around $9,000 for airlift services provided by the Air Mobility Command, for individuals traveling on “no cost” travel orders. According to AFEO, no cost travel orders are issued to USO tour support personnel and some entertainers in those cases where AFEO has stated the government will not pay the transportation costs. 
These orders enable certain support personnel or guests of entertainers to utilize government transportation, with the cost of their transportation ultimately the responsibility of USO. In cases where AFEO has paid for travel conducted on no cost orders, it is necessary for USO to reimburse AFEO. According to AFEO, these improper charges and payments occurred because it was unaware that the travel was being billed to its appropriated fund cite. An AFEO official believed that the Air Mobility Command was billing USO directly for the airlift services. According to an Air Mobility Command official, its billing system recognizes airlift charges incurred by AFEO personnel and personnel traveling in support of AFEO’s mission, but the system does not identify whether the travel is USO related. Nor can the Air Mobility Command bill a nongovernmental entity for airlift services unless that entity has an account in the command’s billing system. We identified numerous examples of questionable payments of USO tour costs by AFEO totaling about $3,000, as shown in table 4. More specifically, we found that AFEO paid for 19 hours of limousine services from hotels in the Washington, D.C., area to Andrews Air Force Base, Maryland, at a cost of $1,656 before an overseas tour began, as well as for several USO thank you dinners for the entertainers at the end of a tour. We could find no documentation to indicate why these expenses were necessary. For example, concerning the thank you dinners, AFEO officials said it was their policy, although unwritten, to reimburse USO for one dinner per tour. Our audit of the documentation indicated that this practice was inconsistently applied. In one instance, we found that AFEO disallowed a thank you dinner for one tour, but it paid for several meals that were classified as thank you dinners for another tour.
Additionally, the documentation was not always adequate to identify whether these expenses were for meals for celebrities or for other individuals on the tour. For example, we found that tour managers and a USO tour producer’s meals were reimbursed over a number of days. An AFEO official acknowledged that there was no existing guidance that identified these items as allowable expenses. AFEO officials told us that they plan to discontinue the practice of reimbursing USO for thank you dinners. We identified numerous examples of unsupported payments by AFEO totaling approximately $344,000 for production support for USO tours. Table 5 highlights the unsupported payments we identified. We found that supporting documentation for the six celebrity tour files we audited was inadequate for a number of invoices, and therefore AFEO had no assurance that the reimbursed costs were proper. We asked AFEO to provide additional documentation on these invoices. AFEO could not provide the necessary documentation and stated that this was the only documentation USO provided. We asked USO for detailed support for a number of selected invoices. USO did not have support readily available in its records. In response to our request for additional documentation, USO contacted the vendors and received details on several invoices. USO provided additional support for $43,910 of the $343,910 included in table 5. For the largest case in our testing, AFEO reimbursed $216,750 for production support based on a single entry on an invoice. In contrast, our examination of another invoice for production support included an itemized list of specific items such as microphone stands, speakers, and stage supports. Additionally, based on our audit of five noncelebrity tours, we found that documentation was far more comprehensive in support of the expenses paid by AFEO. Additionally, in some instances we were unable to identify which individuals received celebrity honoraria. 
We traced names from the invitational travel orders on the six tours audited but were unable to verify which individuals were being paid honoraria and which ones were not. In some cases, individuals who were part of a celebrity’s entourage were classified as celebrities and received honoraria while others were not. AFEO agreed that it was not always possible to identify which names listed on invitational travel orders received honoraria. In one instance, honoraria and production support costs were charged for 13 individuals, but the supporting documentation indicated that only 12 individuals participated in the tour. An AFEO official stated that the individual’s itinerary must have changed and acknowledged that this should have been documented in the file. Based on available documentation, AFEO was charged $900 in honoraria and production support costs for an individual who did not participate in the tour. As a result of our analysis, AFEO verified that this individual did not participate in the tour, and it is seeking reimbursement from USO. USO officials acknowledged the problems we identified with the transactions we reviewed. They stated they did not have a clear understanding of AFEO’s policy as to which expenses were reimbursable and which ones were not. They stated that they submitted invoices based on prior verbal agreements and past practices with AFEO. USO officials stated that AFEO’s practice over the last several years was inconsistent and that reimbursement for certain expense items was “hit or miss” from one tour to the next. According to USO officials, it was their intention to submit invoices and vouchers for expenses in accordance with federal laws and regulations. However, because they had no specific instructions identifying which costs were allowable and which costs were not allowable, it was sometimes frustrating for them to decide what to include as an expense item in an invoice package. 
USO and AFEO acknowledged that they need better policies and procedures to provide reasonable assurance that expenses are authorized in an appropriate manner and are reimbursable based upon the contracts between the organizations. As a result of our audit, USO and AFEO officials told us they have initiated some actions to improve accountability and controls over federal funds used to support USO’s activities and to recover funds paid by AFEO that USO should have paid. For example, a USO official told us USO is in the process of developing written guidance for its celebrity tour managers and accounting staff that specifies those expenses that are reimbursable under the contracts with AFEO and those that are not. AFEO officials told us that to improve financial and management controls, their office, in conjunction with the Air Force Directorate of Services, is in the process of drafting an operating instruction for AFEO. They stated that this operating instruction will address AFEO roles and responsibilities, overseas areas served, points of contact, promotional package selection process, tour projections, authorized reimbursements, invitational travel orders, passports, visas, immunizations, military and commercial transportation, final payment process, and tour evaluation forms. Additionally, according to AFEO officials, they have taken the following actions. Established procedures to track those contract reimbursement and purchase card transactions used to fund USO celebrity tours versus noncelebrity tours. Created a listing of allowable reimbursable items, specified by contract line item number, along with the documents required for final payment processing. The listing was provided to USO, as well as to the U.S. Air Force contracting office responsible for administering the contracts between AFEO and USO for a modification to the basic contract.
Improved controls over the purchase of airline tickets charged to the centrally billed account by implementing procedures for processing requests for approval of upgrades to business-class travel through the U.S. Air Force, Director of Services. According to AFEO officials, they now document cost comparisons of economy-class airline tickets versus business-class travel in the AFEO business-class authorization letter. A copy of the approved upgrade letter will be provided to the contract travel office and maintained in the individual tour folders with copies of the annotated invitational travel orders. For those portions of overseas travel that are upgraded to business-class because no other class of travel is available, the commercial travel office will certify these circumstances by entering a statement on the itinerary as required by the Joint Travel Regulations. No prior approval is necessary under these circumstances. USO will fund any domestic portion of travel that incurs additional costs above economy- and/or coach-class standards. If any other type of upgrade is provided, at no additional cost to AFEO, the change in travel class will be noted with a memorandum for the record and filed in the tour folder. Improved oversight of expenses reimbursed to USO for overseas tours. According to AFEO officials, now, at least three individuals are reviewing expense packages for payment certification. First, the applicable AFEO circuit manager reviews the voucher package to assure receipts and requests for reimbursement match the itinerary and are appropriate. Second, the AFEO financial advisor reviews the package to assure reimbursements are authorized and properly documented, then signs the package as the acceptance officer. Third, either the AFEO administrative assistant or the AFEO deputy director performs a final review and certifies the package for payment. The Defense Finance and Accounting Form 250 is prepared and certified by two signatures. 
Additionally, as of September 2003, AFEO had recovered about $19,000 in improper and questionable payments it made to support USO overseas tours. We have not audited any transactions occurring after AFEO officials stated that these actions were taken, and thus we cannot conclude whether the actions have actually taken place or have resulted in improved financial and management controls. As U.S. armed forces continue to be actively engaged in operations throughout the world, it is important that troop morale is maintained at high levels. USO’s overseas entertainment tours have provided quality entertainment to the troops, and DOD’s financial and in-kind support has been key to the Armed Forces Professional Entertainment Program’s continued success. When a nongovernmental organization, such as USO, receives federal funds to assist a government organization, such as DOD, that organization is accountable for the proper use of the funds. A key factor in helping achieve that accountability is to implement appropriate internal controls. However, our audit found that DOD’s program lacks effective financial and management controls to provide reasonable assurance that federal funds are used consistent with the terms specified in grant and contract agreements. Neither AFEO nor USO can determine the total amount of financial or in-kind support DOD provides to sustain USO’s overseas tours. Furthermore, without adequate supplemental guidance to identify allowable costs for overseas tours and effective management oversight, AFEO does not have reasonable assurance that it is paying for only allowable costs and that appropriated funds are being spent in accordance with federal laws and regulations. Moreover, USO’s failure to fully comply with audit requirements in grant and contract agreements reduces DOD’s assurance that USO has adequate internal controls over federal program funds, leaving the program vulnerable to fraud, waste, and abuse.
Had USO’s independent auditor fully tested internal controls, the problems we identified might have surfaced. AFEO officials stated they have taken action to improve management oversight during the review of invoice packages and to develop written policies and procedures consistent with DOD and federal travel regulations. Although these actions, if implemented, should assist AFEO in achieving a stronger control environment, an earnest commitment by DOD and USO management is also needed to ensure proper controls and use of DOD funds. To improve financial and management controls over support provided to USO, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in consultation with the Secretary of the Air Force, to take the following actions. Develop and implement a record-keeping system capable of reporting all appropriated and nonappropriated funds, including all in-kind goods, services, and infrastructure provided by DOD in support of USO overseas tours and operations. Among other things, this system should clearly identify airlift services provided in support of USO tours. Take steps to ensure USO complies with the Single Audit Act as stipulated in its grant and contractual agreements with DOD, which require an annual audit that tests internal controls over federal funds to assess control risk. Develop and consistently implement supplemental guidance, in accordance with contract terms, and federal travel and acquisition regulations, to identify allowable expenses and reimbursements and appropriate documentation for travel-related USO expenses, including commercial air travel, honoraria, and services and equipment provided for USO. Identify all expenses AFEO inappropriately paid, which should have been paid by USO, and request that USO fully reimburse AFEO for the expenses. 
Arrange for DOD’s Inspector General to perform internal control audits periodically to determine if the control weaknesses we identified are resolved, and report the results of these audits to the Secretary of Defense and the Secretary of the Air Force. In commenting on a draft of this report, the Principal Deputy Under Secretary of Defense for Personnel and Readiness concurred with four of our recommendations and partially concurred with the fifth. The Principal Deputy Under Secretary indicated that actions are underway or completed to address our recommendations and correct the deficiencies noted in our report. Furthermore, although he concurred with our first recommendation, he acknowledged that DOD financial systems do not support an automated means for reporting the type of information we suggested. However, he noted that AFEO continues to implement and improve its record-keeping systems to clearly identify and report USO tour costs by establishing a separate Bank of America centrally billed account for all commercial transportation costs associated with USO celebrity tours; a separate purchase card account for visas, excess baggage, printing, shipping, and miscellaneous costs associated with USO celebrity tours; and an accounting line in the Air Mobility Command billing process to identify, where possible, military airlift transportation costs associated with USO celebrity tours. The Principal Deputy Under Secretary further indicated AFEO has taken action to identify and recoup expenses inappropriately reimbursed to USO, and that DOD Instruction 1330.13, Armed Forces Entertainment, will also be revised to require the military services to submit to AFEO an annual report identifying appropriated funds, nonappropriated funds, and in-kind goods or services provided to USO. According to the Principal Deputy Under Secretary, all actions are to be completed by April 30, 2004. 
Finally, the Principal Deputy Under Secretary partially concurred with our final recommendation, agreeing that periodic internal control audits are necessary to determine whether control weaknesses we identified are resolved. He believes, however, that USO’s independent auditor’s annual audit, performed in accordance with the Single Audit Act, rather than audits performed by the DOD Inspector General, would meet the requirement to test internal controls over federal funds to assess control risk, and that the DOD Inspector General would provide periodic oversight of the single audits performed for USO. We agree that these actions meet the intent of our recommendation. The Principal Deputy Under Secretary’s comments are included in appendix III of this report. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees with jurisdiction over DOD’s budget, as well as to the Secretary of Defense, the Secretary of the Air Force, and the President and Chief Executive Officer of USO. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Sharon L. Pickup on (202) 512-9619 or Greg D. Kutz on (202) 512-9505 if you or your staff have any questions. You may also contact George F. Poindexter, Assistant Director, on (202) 512-7213, or Darby W. Smith, Assistant Director, on (202) 512-7803. Major contributors to this report are listed in appendix IV. We reviewed the Department of Defense’s (DOD) Armed Forces Entertainment Program and its partnership with the United Services Organization (USO) in providing U.S. armed forces with celebrity entertainment overseas.
We collected, reviewed, and analyzed relevant program information and conducted interviews with DOD and USO officials responsible for administering the Armed Forces Entertainment Program, specifically officials from the Office of the Under Secretary of Defense (Personnel and Readiness), Morale, Welfare and Recreation Policy; Armed Forces Entertainment Office (AFEO); Defense Supply Service—Washington, Department of the Army; 11th Contracting Squadron, Department of the Air Force, Bolling Air Force Base, District of Columbia; and USO. Additionally, we interviewed personnel with the Deloitte and Touche Accounting Firm, the independent auditing firm responsible for auditing USO’s annual consolidated financial statements and supplemental schedules. To determine the source and amount of federal funding provided to support USO, we reviewed and analyzed relevant congressional authorization and appropriations acts. We also reviewed and analyzed applicable grant agreements; contract negotiation files; DOD and Air Force operations and maintenance budget data; USO’s annual audited financial statements and supporting documentation and annual financial reports; AFEO financial records, including the centrally billed and purchase card accounts; and Air Mobility Command billing data for passengers and baggage for selected airlift missions. We discussed discrepancies that existed among the various financial records with AFEO, USO, Air Force Contracting Squadron, and Air Mobility Command officials. Other than for the grants, we were unable to obtain complete appropriated funding data for fiscal years 2000 through 2002 for federal funds provided to USO for overseas tours. We could not obtain complete funding data because of limitations in DOD’s record-keeping systems, which did not differentiate between costs for celebrity versus noncelebrity tours. 
Therefore, AFEO officials agreed to take the steps necessary to provide, to the extent possible, complete funding data for fiscal year 2002. However, AFEO officials could not assure us that the totals included all appropriated funds provided in support of USO overseas tours. Additionally, they told us they could not provide the same information for fiscal years 2000 and 2001, because the records for those years were less complete, and the time and resources required to gather and verify the information were more than could be expended given the unit's current workload. DOD officials could not provide sources and amounts for total nonappropriated support provided to USO because their record-keeping systems do not aggregate or report the needed information. We reviewed USO records for in-kind contributions, but those records do not clearly distinguish private sector contributions from federal contributions. To assess the adequacy of internal controls in place to provide reasonable assurance that appropriated federal funds are used consistent with the terms specified, we reviewed applicable federal laws and regulations, DOD policies and procedures, and GAO's Standards for Internal Control in the Federal Government. Additionally, we audited the contract between USO and AFEO. We interviewed USO and AFEO officials to gain an understanding of internal controls, and reviewed the payment process for celebrity and noncelebrity tours. In gathering this information, we concluded that internal controls over the payment process were ineffective, and therefore we limited our auditing to a nonrepresentative selection of tours. We audited selected USO tour transactions to evaluate the design and implementation of key internal control procedures and activities. We selected 11 tours—6 celebrity and 5 noncelebrity tours.
We traced expenses that were paid by AFEO to supporting invoices and receipts, requesting additional documentation from AFEO as well as from vendors for certain transactions. In addition to our audit of selected transactions, we looked at whether indications existed of potentially improper and questionable transactions as well as invoices that were reimbursed without adequate documentation. We discussed discrepancies with AFEO, USO, or contract officials at Bolling Air Force Base, District of Columbia, who were responsible for administering the contract between USO and AFEO. Additionally, we interviewed the USO contract accountant to determine the relationship between accounting fees collected under the contract and those billed as part of tour expenses that were submitted to AFEO by USO for reimbursement. Based on our initial review of the tour files, we also audited AFEO's centrally billed and purchase card accounts for fiscal year 2002. We audited AFEO's centrally billed account for fiscal year 2002 to determine whether the amount spent on first-class and business-class airline travel in support of USO tours was in accordance with DOD and federal policies and procedures. To assess the magnitude of first-class and business-class travel, we isolated those transactions billed to AFEO's centrally billed account specifically related to airline travel. We created a new file that contained only the first-class and business-class travel billed to AFEO's centrally billed account. The airline industry uses certain fare and service codes to indicate the class of service purchased and provided. The database contained transaction-specific information, including the fare and service code to price the tickets AFEO purchased. Using data-mining techniques, we identified the fare basis codes that corresponded to the issuance of first-, business-, and coach-class travel.
Using these codes, we selected all airline transactions that contained at least one leg in which AFEO paid for first-class or business-class travel accommodations. We estimated the cost of coach travel using the government rates established by the General Services Administration (GSA). For flights not covered by GSA, we estimated coach travel using the lowest current rates identified from Expedia.com. We also analyzed purchase card transactions for fiscal years 2001 and 2002 to provide reasonable assurance that charges were in accordance with DOD policies and procedures and in support of USO tours. We also reviewed USO's independent auditor's reports and management letters for calendar years 1996 through 2001, as well as the independent auditor's work papers for audit work related to USO transactions with AFEO for calendar year 2001. The 2001 audit was the most recently completed audit that was available through the end of our field work. In performing this audit, we used the same accounting records and financial reports DOD and USO use to manage the Armed Forces Entertainment Program. We did not independently determine the reliability of all the reported financial information. However, our recent audits addressing the reliability of DOD's financial statements question the reliability of reported financial information. Furthermore, our recent audits of DOD's travel card and purchase card accounts identified weaknesses in the overall control environments and breakdowns in key controls relied on to manage these programs, leaving them vulnerable to fraud, waste, and abuse. We performed our audit from March 2003 through September 2003 in accordance with generally accepted government auditing standards. Table 1 details our analysis of the improper first-class and business-class travel we identified based on our limited testing.
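A rough sketch of the fare-code screening and coach-fare comparison described above follows; the fare basis codes, sample transactions, and coach estimates are illustrative values of ours, not data from the audit.

```python
# A minimal sketch, not the auditors' actual tooling. The fare basis
# codes here are a simplified assumption: the first letter of a real
# code only loosely signals class of service.
PREMIUM_CODES = {"F", "A", "J", "C"}  # F/A: first class; J/C: business class

# Illustrative centrally billed account transactions: one code per leg.
transactions = [
    {"ticket": "T1", "legs": ["F", "Y"], "fare_paid": 3982.00},
    {"ticket": "T2", "legs": ["Y", "Y"], "fare_paid": 280.00},
]

def flag_premium(transactions):
    """Select tickets with at least one first- or business-class leg."""
    return [t for t in transactions
            if any(code in PREMIUM_CODES for code in t["legs"])]

def excess_cost(ticket, coach_estimate):
    """Fare paid minus an estimated coach fare (GSA rate or lowest fare)."""
    return ticket["fare_paid"] - coach_estimate

# excess_cost(flag_premium(transactions)[0], 280.00) -> 3702.0
```

The final line mirrors the report's one-tour example, in which a $3,982 first-class ticket was compared against a $280 economy fare for the same trip.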
These cases illustrate the improper use of first-class and business-class travel, taken without authorization or adequate justification, and the resulting increase in travel costs. Following the table is more detailed information on some of these cases. Example 1 involved five individuals traveling first class at a cost to the government of $16,658. An audit of the tour files and the travel order indicated that the travel order specifically states that travel at government expense shall not exceed the cost of common carrier (i.e., the rate authorized under the government contract). However, the individuals were issued first-class tickets for this trip, resulting in an additional cost to the government of $14,978 compared to an estimated total cost of about $1,680 for eight coach tickets. Example 2 involved six individuals traveling first class at a cost to the government of $8,397. An audit of the tour files and the travel order indicated that the travel order specifically states that travel at government expense shall not exceed the cost of common carrier. However, the individuals were issued first-class tickets for this trip, resulting in an additional cost to the government of $6,496 compared to an estimated total cost of about $1,901 for six coach tickets. This tour also had seven individuals traveling business-class at a cost to the government of $13,488 for domestic flights. According to AFEO, business-class is only authorized for overseas flights, not domestic flights. This resulted in an additional cost to the government of $12,088 compared to an estimated cost of about $1,400 for coach-class tickets. Example 5 involved two individuals who traveled first class from New York–LaGuardia to Jacksonville, Florida. Supporting documentation indicates that business-class was authorized. The cost of two business-class tickets amounted to $720, compared to the $1,556 cost of the two first-class tickets.
Without authorization or valid justification, the additional $836 spent on the first-class ticket was improper. Furthermore, our audit showed that the difference between the cost of first-class travel and the cost of economy class can be significant. For example, during a review of one tour, we found that the cost of one first-class round trip ticket was $3,982, whereas an economy-class airline ticket for the same trip cost $280. GSA and DOD travel regulations specify stringent circumstances under which premium-class travel (e.g., first-class, business-class) can be authorized. For example, the Joint Travel Regulations (JTR) and the Joint Federal Travel Regulations (JFTR) limit the authority to authorize first-class travel to the Secretary of Defense, his Deputy, or another authority as designated by the Secretary of Defense. Further, the delegation of authority to authorize and/or approve first-class travel is to be held at “as high an administrative level as practicable to ensure adequate consideration and review of the circumstances necessitating the first-class accommodations.” A DOD directive on transportation and management specifically states that the secretaries for personnel within the military services and secretariats are the approving authorities for first-class travel. The military service secretaries may delegate approval authority for first-class travel to under secretaries, service chiefs of staff or their vice and/or deputy chief of staff, and four-star major commanders or their three-star vice and/or deputy commander. The directive explicitly states that approving authority cannot be delegated to anyone lower than these officials. DOD and GSA policies also require that authorization for premium-class airline accommodations be made in advance of the actual travel unless extenuating circumstances or emergency situations make advance authorization impossible.
Specifically, JTR and JFTR require that first-class accommodations be authorized only when: coach-class airline accommodations or premium-class other than first-class airline accommodations are not reasonably available; first-class airline accommodations are necessary because the employee and/or dependent is so handicapped or otherwise physically impaired that other accommodations cannot be used, and such condition is substantiated by competent medical authority; or first-class airline accommodations are needed when exceptional security circumstances require such travel. JTR and JFTR allow the transportation officer, in conjunction with the official who issued the travel order, to approve premium-class travel (i.e., business-class) other than first-class travel. DOD restricts premium-class travel to the following eight circumstances: (1) regularly scheduled flights between origin and destination provide only premium-class accommodations and it is certified on the travel voucher; (2) coach-class travel is not available in time to accomplish the purpose of the official travel, which is so urgent it cannot be postponed; (3) the traveler's disability or other physical impairment requires use of other than first-class service and the condition is substantiated in writing; (4) premium-class accommodations are required for security purposes or because exceptional circumstances make the use essential to the successful performance of the mission; (5) coach-class service on authorized and/or approved foreign carriers does not provide adequate sanitation or meet health standards; (6) premium-class accommodations would result in overall savings to the government because of subsistence costs, overtime, or lost productive time that would be incurred while awaiting coach-class accommodations; (7) transportation is paid in full by a nonfederal source; or (8) travel is to or from a destination outside the continental United States, and the scheduled flight time (including stopovers) is in excess of 14 hours.
However, a rest stop is prohibited when travel is authorized by premium-class accommodations. Both GSA and DOD regulations allow a traveler to upgrade to premium-class travel, other than first-class, at personal expense, including through redemption of frequent traveler benefits. GSA also identified agency mission as one of the criteria for premium-class travel. Claudia J. Dickey, Stephen P. Donahue, Johnny R. Bowen, Wayne A. Ekblad, Kenneth E. Patton, M. Jane Hunt, Nancy L. Benco, and Julio A. Luna made significant contributions to this report. | For more than 60 years, the United Services Organization (USO), in partnership with the Department of Defense (DOD), has provided support and entertainment to U.S. armed forces, relying heavily on private contributions and on funds, goods, and services from DOD. To assist USO, Congress, beginning in fiscal year 2000, provided a total of $23.8 million in grants to be awarded through DOD as seed money for an endowment fund. The availability of these funds to USO, along with DOD's ongoing support funded in its regular annual appropriations, represents a substantial financial commitment. GAO determined (1) the source and amount of DOD's support to USO in fiscal years 2000-2002 and (2) the sufficiency of internal controls to provide reasonable assurance that federal funds are used in an appropriate manner. GAO focused its audit on USO World Headquarters' activities and audited a limited selection of USO transactions for the 3 fiscal years. During fiscal years 2000 through 2002, DOD provided USO with substantial appropriated and nonappropriated support, but the total amount cannot be determined because of limitations in DOD's and USO's record-keeping systems. GAO identified at least $34.7 million in appropriated funds that DOD provided to support USO during fiscal years 2000 through 2002.
Of this amount, $20.8 million was in congressionally appropriated grants to help USO establish the Spirit of Hope Endowment Fund to ensure the continuation of USO's programs and services. Another $12.1 million was for reimbursements to USO, and at least $1.8 million was paid directly by DOD for tour-related expenses such as commercial airfares, visas, and passports. DOD also provided other appropriated support, such as lodging and transportation. However, GAO could not determine the total monetary value of DOD's support from appropriated funds because neither DOD nor USO has record-keeping systems that aggregate the needed information. DOD also provides USO with nonappropriated support, largely in the form of in-kind goods (e.g., food), services (e.g., Internet access), and infrastructure support (e.g., performance facilities), to help sustain USO's overseas tours, but the same limitations precluded GAO from determining the total monetary value. DOD and USO did not have sufficient financial and management controls to reasonably ensure that all appropriated funds were used appropriately. DOD properly awarded grant funds to USO, and USO properly administered these funds. However, USO did not require its independent auditor to fully test internal controls over grants or funds reimbursed to USO by DOD, as required by its agreements with DOD. In terms of reimbursements to USO and direct payments by DOD, DOD lacked clearly written supplemental guidance regarding allowable expenses, management oversight in reviewing USO's invoices, and procedures for capturing reimbursable expenses. In some cases, these weaknesses resulted in inappropriate expenditures of funds. Based on limited testing, GAO found problems with payments totaling about $433,000, including about $86,000 in improper expenditures, $3,000 in questionable expenditures, and $344,000 for unsupported expenditures. 
Had USO's independent auditor tested internal controls, the problems GAO identified might have surfaced. As a result of GAO's audit, DOD stated it has initiated several actions to improve financial and management controls and to recover funds from USO. As of September 2003, DOD had recovered about $19,000 from USO in improper payments for overseas tour expenses. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Since 1997, all federal agencies have been subject to a common set of personnel security investigative standards and adjudicative guidelines for determining whether servicemembers, government employees, industry personnel, and others are eligible to receive a security clearance. Clearances allow personnel to access classified information categorized into three levels: top secret, secret, and confidential. Unauthorized disclosure could reasonably be expected to cause “exceptionally grave damage” to national defense and foreign relations for top secret information, “serious damage” for secret information, and “damage” for confidential information. Individuals who need access to classified information for extended periods of time are required to periodically renew their clearance (a reinvestigation). The time frames for reinvestigations are 5 years for top secret clearances, 10 years for secret clearances, and 15 years for confidential clearances. In addition to requiring different time frames for renewal, the different levels of clearances require that different types of background information be gathered and used in making the adjudicative decision about whether an individual is or is not eligible for a clearance (see table 1). Much of the information for a secret or confidential clearance is gathered through electronic files. The investigation for a top secret clearance requires the information needed for the secret or confidential clearance as well as additional data which are gathered through time-consuming tasks, such as interviews with the subject of the investigation request, references in the workplace, and neighbors. OPM officials estimated that the time required to gather information to complete initial investigations for top secret clearances is twice that needed for reinvestigations for top secret clearances and 10 times as much as that needed for initial investigations or reinvestigations for secret or confidential clearances.
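As a quick illustration of the renewal timeframes just described, a hypothetical lookup that maps clearance level to the next reinvestigation due date might look like the sketch below; the function and table names are our own, not OPM's or DOD's.

```python
# A minimal sketch assuming a simple level -> years mapping; not an
# actual OPM or DOD system.
from datetime import date

# Reinvestigation timeframes stated in the text: 5/10/15 years by level.
REINVESTIGATION_YEARS = {"top secret": 5, "secret": 10, "confidential": 15}

def reinvestigation_due(level, last_granted):
    """Return the date the next reinvestigation is due for a clearance
    granted (or last renewed) on last_granted."""
    years = REINVESTIGATION_YEARS[level]
    # replace() keeps month and day; a Feb 29 grant date would need
    # special handling in real code.
    return last_granted.replace(year=last_granted.year + years)

# reinvestigation_due("top secret", date(2000, 3, 1)) -> date(2005, 3, 1)
```

Note the text's later point that upgrading a position from secret to top secret halves this interval, so investigation and adjudication must occur twice as often.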
DOD estimated that adjudicators’ reviews of the longer investigative reports for top secret clearances also take three times as long as the reviews of investigative reports for determining eligibility for secret or confidential clearances. Moreover, if the clearance required for a position is upgraded from secret to top secret, the investigation and adjudication would need to be performed twice as often (every 5 years instead of every 10 years). We found that DOD has taken steps to address challenges found at all three stages of its personnel security clearance process, but many of the steps have not yet resulted in implementations that fully address the challenges. In the preinvestigation stage, DOD has begun decreasing the uncertainty in its projections of how many and what levels of clearances are required by identifying the clearances needed for military and civilian positions and developing software that will result in electronic submissions of clearance investigation requests to OPM. Regarding the second stage of the clearance process, OPM has been hiring investigative staff to address past personnel shortages and the resulting delays from having too few staff for the investigative workload. Adding thousands of staff could, however, result in continued timeliness problems as well as quality concerns until the staff gain experience. Regarding the adjudication stage, DOD’s Joint Personnel Adjudication System consolidated the databases for 10 DOD adjudication facilities to enhance monitoring of adjudicative decisions and time frames for renewing clearances, but a new law requires a governmentwide clearance database. At this time, DOD is uncertain about the number and level of clearances that it requires and has experienced problems submitting investigation requests, but the department has begun addressing these problems. DOD’s inability to accurately project such clearance requirements makes it difficult to determine budgets and staffing needs. 
DOD is addressing this problem by identifying the clearance needs for military and civilian positions, but no military service had completed this task as of May 2005. Similarly, in response to our May 2004 recommendation to improve the projection of clearance requirements for industry personnel, DOD indicated that it is developing a plan and computer software to have the government’s contracting officers authorize the number of industry personnel investigations required to perform the classified work on a given contract and link the clearance investigations to the contract number. Despite having 2 years between the time when OPM and DOD announced an agreement for the transfer of DOD’s investigative functions and personnel to OPM and when the transfer actually occurred, DOD cannot make full use of OPM’s Electronic Questionnaires for Investigations Processing (eQIP), the system used to submit materials required to start a background investigation. To overcome this challenge to the prompt and efficient submission of investigation requests, DOD is developing software that will convert the department’s submissions into the eQIP format. Also, OPM told us that about 11 percent of the February 2005 clearance investigation requests submitted outside of eQIP were returned to the requesting offices when missing or discrepant information could not be obtained telephonically. Converting a DOD request for investigation into a format that is compatible with OPM’s eQIP and obtaining missing or corrected data to open an investigation delays the completion of the clearance process. OPM does not monitor how many days elapse between initial submissions and resubmissions of corrected material and, therefore, does not include that time in its calculations of the average time required to complete an investigation. 
Until DOD implements the software currently being developed and fully determines its clearance requirements, the department will continue to encounter problems determining budgets and staff and minimizing the delays in completing the clearance process. DOD and the rest of the government serviced by OPM are not receiving completed investigations promptly, but recent initiatives may decrease these delays. For February 2005, OPM told us that it had more than 185,000 investigations governmentwide that had taken longer than its goals for closing cases: 120 days for initial investigations and 180 days for reinvestigations. The current goals for completing a case allow more time than did the DOD goals reported in our earlier work and, therefore, comparison of the investigation backlog size that OPM reported in February 2005 to the backlog size cited in our prior reviews would not provide any meaningful information. The Intelligence Reform and Terrorism Prevention Act of 2004 requires that not later than December 17, 2006, and ending December 17, 2009, each authorized adjudicative agency shall make a determination on at least 80 percent of all applicants for personnel security clearances within an average of 120 days—90 days to complete the investigation and 30 days to complete the adjudication—of receiving the security clearance application. Also, not later than February 15, 2006, and annually thereafter through 2011, a report on the progress made during the preceding year toward meeting these goals is to be supplied to appropriate congressional committees. Table 2 shows that, across the government, standard service for both initial investigations and reinvestigations for top secret clearances resulted in more than 1 year elapsing, on average, between submitting the investigation requests and closing the investigations. 
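One plausible reading of the act's timeliness standard described above, a determination on at least 80 percent of applicants within an average of 120 days, can be sketched as a simple check; the sample durations and this interpretation of the statutory wording are our assumptions.

```python
# A minimal sketch of one reading of the statutory goal; sample data and
# the 80-percent interpretation are assumptions, not the official metric.
GOAL_DAYS = 120       # 90 days investigation + 30 days adjudication
GOAL_FRACTION = 0.80  # at least 80 percent of applicants

def meets_goal(durations_days):
    """True if the fastest 80 percent of determinations average no more
    than 120 days from application receipt to decision."""
    count = max(1, int(len(durations_days) * GOAL_FRACTION))
    fastest = sorted(durations_days)[:count]
    return sum(fastest) / len(fastest) <= GOAL_DAYS

# Five determinations; the fastest four average 100 days, within the goal.
# meets_goal([70, 90, 110, 130, 400]) -> True
```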
OPM does, however, permit agencies to request priority (expedited) processing on a limited number of investigations, and those investigations took less time to close. Table 2 also shows a difference in the time required to close initial investigations and reinvestigations for top secret clearances. In February and May 2004, we reported that different risks are associated with delays in completing initial investigations and reinvestigations. Delays in completing initial personnel security clearances can have negative impacts on the costs of performing classified work within or for the U.S. government. For example, delays in clearing industry personnel can affect the cost, timeliness, and quality of contractor performance on defense contracts. Conversely, delays in completing reinvestigations may lead to a heightened risk of national security breaches because the longer individuals hold clearances, the more likely they are to be working with critical information systems. Our prior review noted that delays in completing personnel security clearance investigations for DOD and other agencies have resulted, in part, from a shortage of investigative staff. In February 2004, we noted that the Deputy Associate Director of OPM's Center for Investigations Services estimated that OPM and DOD would need a total of roughly 8,000 full-time-equivalent investigative personnel to eliminate backlogs and deliver investigations in a timely fashion to their customers. To reach its goal of 8,000, OPM must add and retain approximately 3,800 full-time-equivalent investigative staff, and retain all of the estimated 4,200 full-time-equivalent staff that OPM and DOD had combined in December 2003. In our February 2004 report, we noted that OPM's primary contractor was adding about 100 and losing about 70 investigators per month. If the high rate of turnover has continued, the ability to grow investigative capacity could be difficult.
In addition, OPM could be left with a large number of investigative staff with limited experience. OPM’s Deputy Associate Director noted that the inexperience among investigative staff results in investigations not being completed as quickly as they might have been if the investigators were more experienced. The OPM official also noted that the quality of the investigations is not where she would like to see it. As we noted in our September 2004 testimony before this subcommittee, OPM had continued to use its investigations contractor to conduct personnel security clearance investigations on its own employees even though we raised an internal control concern about this practice during our 1996 review. OPM officials indicated that they plan to use the government employees that were transferred from DOD to address this concern. In addition to adding staff, two other initiatives should decrease delays in completing clearance investigations. A new DOD initiative—the phased periodic reinvestigation (phased PR)—that we discussed in our May 2004 report can make more staff available and thereby decrease the workload associated with some reinvestigations for top secret clearances. The phased approach to periodic reinvestigations involves conducting a reinvestigation in two phases; a more extensive reinvestigation would be conducted only if potential security issues were identified in the initial phase. Specifically, investigative staff would verify residency records and conduct interviews of listed references, references developed during the investigation, and individuals residing in the neighborhood only if potential security issues were identified in other parts of the standard reinvestigation process. The Defense Personnel Security Research Center showed that at least 20 percent of the normal investigative effort could be saved with almost no loss in identifying critical issues needed for adjudication. 
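The phased reinvestigation logic described above can be sketched as a simple branch; the grouping of checks into phases is a simplification of ours, not the actual investigative standards.

```python
# A minimal sketch of the two-phase decision; check names and grouping
# are simplified assumptions, not the real reinvestigation standards.
PHASE_ONE_CHECKS = ["national_records_checks", "credit_check",
                    "subject_interview"]
PHASE_TWO_CHECKS = ["residency_verification", "reference_interviews",
                    "neighborhood_interviews"]

def phased_pr(issues_found_in_phase_one):
    """Return the checks performed: the expanded second phase runs only
    when the first phase surfaces potential security issues."""
    checks = list(PHASE_ONE_CHECKS)
    if issues_found_in_phase_one:
        checks += PHASE_TWO_CHECKS
    return checks

# phased_pr(False) skips the interview-heavy second phase entirely.
```

Skipping the second phase for issue-free cases is what frees the investigative effort (at least 20 percent, per the Defense Personnel Security Research Center) cited in the text.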
In December 2004, the President approved the use of the phased PR for personnel needing to renew their top secret clearances. Another source of investigative, as well as adjudicative, workload reduction may result from the recent reciprocity requirements contained in the Intelligence Reform and Terrorism Prevention Act of 2004. Our May 2004 report noted that the lack of reciprocity (the acceptance of clearance and access granted by another department, agency, or military service) was cited as an obstacle that can cause contractor delays in filling positions and starting work on government contracts. Under the new law, all security clearance background investigations and determinations completed by an authorized investigative agency or authorized adjudicative agency shall be accepted by all agencies. DOD’s Joint Personnel Adjudication System (JPAS) consolidated 10 DOD adjudication databases to provide OUSD(I) with better monitoring of adjudication-related problems, but a new law requires wider consolidation. Past delays in implementing DOD’s JPAS greatly inhibited OUSD(I)’s ability to monitor overdue reinvestigations and generate accurate estimates for that portion of the backlog. In addition to correcting these problems, implementation of much of JPAS has eliminated the need for DOD’s 10 adjudication facilities to maintain their own databases of adjudicative information. This consolidation may also assist with a requirement in the Intelligence Reform and Terrorism Prevention Act of 2004. 
Among other things, the law requires that not later than December 17, 2005, the Director of OPM shall, in cooperation with the heads of certain other government entities, establish and commence operating and maintaining a single, integrated, secure database into which appropriate data relevant to the granting, denial, and revocation of a security clearance or access pertaining to military, civilian, or government contractor personnel shall be entered from all authorized investigative and adjudicative agencies. OPM officials stated that JPAS and OPM's Clearance Verification System account for over 90 percent of the government's active security clearances and that the remaining clearances are primarily housed in classified record systems (e.g., the Central Intelligence Agency's Scattered Castles) devoted to the intelligence community. Additionally, DOD may move closer toward the 9/11 Commission's recommendation of having a single government agency responsible for providing and maintaining clearances by co-locating its 10 adjudication facilities on a single military installation. The recent base realignment and closure list includes a recommendation to co-locate all of DOD's adjudication facilities. While co-location—if it occurs—would not be the same as consolidation, it might provide opportunities for greater communication within DOD. However, the proposed co-location at Fort Meade, Maryland, could also result in the loss of trained staff who might choose not to relocate, such as some of the roughly 400 employees in the Defense Industrial Security Clearance Office and the Defense Office of Hearings and Appeals Personal Security Division in Columbus, Ohio.
In our February 2004 report, we noted that DOD had (1) as of September 30, 2003, a backlog of roughly 90,000 completed investigations that had not been adjudicated within prescribed time limits, (2) no DOD-wide standard for determining how quickly adjudications should be completed, and (3) inadequate adjudicator staffing. Also at the time of our report, the DOD Office of Inspector General was examining whether the Navy adjudicative contracts led to contractors' staff performing an inherently governmental function—adjudication. Because of that examination, it was unclear whether the Army and Air Force adjudication facilities would be able to use similar contracting to eliminate their backlogs. Although DOD concurred with our April 2001 recommendations for improving its adjudicative process, it has not fully implemented any of the recommendations as of May 2005. OUSD(I) reported the following progress for those four recommendations. (Our recommendations appear in italics, followed by a summary of DOD's response and/or actions.) Establish detailed documentation requirements to support adjudication decisions. Use of JPAS will require greater documentation on adverse information and possible factors to mitigate that information, but this feature of JPAS has not been fully implemented. Require that all DOD adjudicators use common explanatory guidance. DOD has developed this guidance and is awaiting review by the Personnel Security Working Group of the Policy Coordinating Committee for Records Access and Information Security Policy, an interagency group. Establish common adjudicator training requirements and develop appropriate continuing education opportunities for all DOD adjudicators. A work plan has been developed to establish an adjudicator certification process, to be implemented in late 2005 or early 2006. The plan will include continuing education requirements.
Establish a common quality assurance program to be implemented by officials in all DOD adjudication facilities and monitor compliance through annual reporting. OUSD(I) indicates DOD is developing criteria and a form to assess the quality of the investigations that DOD is receiving. Also, in the future, cases are to be randomly selected from JPAS and reviewed by a team of adjudicators from the various adjudication facilities.

Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. In summary, Mr. Chairman, we will continue to monitor this area as we do for all of the high-risk programs on our list. Much remains to be done to bring lasting solutions to this high-risk area. As we stated in our report, High-Risk Series: An Update, perseverance by the administration in implementing GAO’s recommended solutions and continued oversight and action by the Congress are both essential.

Individuals making key contributions to this statement include Alissa H. Czyz, Jack E. Edwards, Julia C. Matta, and Mark A. Pross.

Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005.
Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005.
DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004.
DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632.
Washington, D.C.: May 26, 2004.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.
Security Clearances: FBI Has Enhanced Its Process for State and Local Law Enforcement Officials. GAO-04-596. Washington, D.C.: April 30, 2004.
Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004.
DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004.
Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003.
DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Security Clearances. GAO-01-465. Washington, D.C.: April 18, 2001.
DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigation Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000.
DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000.
DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000.
DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000.
DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999.
Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999.
Military Recruiting: New Initiatives Could Improve Criminal History Screening.
GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999.
Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 30, 1998.
Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996.
Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996.
Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995.
Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995.
Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995.
Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995.
Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994.
Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993.
Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993.
Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992.
DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993.
Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993.
Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990.
This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Threats to national security--such as the September 11, 2001, terrorist attacks and high-profile espionage cases--underscore the need for timely, high-quality determinations of who is eligible for a personnel security clearance which allows an individual to access classified information. The Department of Defense (DOD) needs an effective and efficient clearance program because it is responsible for about 2 million active clearances and provides clearances to more than 20 other executive agencies as well as the legislative branch. Despite these imperatives, DOD has for more than a decade experienced delays in completing hundreds of thousands of clearance requests and impediments to accurately estimating and eliminating its clearance backlog. In January 2005, GAO designated DOD's personnel security clearance program as a high-risk area. In February 2005, DOD transferred its personnel security investigative functions and about 1,800 positions to the Office of Personnel Management (OPM), after 2 years of negotiation between the agencies. This testimony provides an update on the challenges that led to GAO's high-risk designation. It identifies both the positive steps that have been taken to address previously identified challenges and some of the remaining hurdles. GAO will continue to monitor this area. While DOD has taken steps to address the problems that led to designating its clearance program as high risk, continuing challenges are found in each of the three stages of DOD's personnel security clearance process. 
Preinvestigation: To address previously identified problems in projecting clearance workload, DOD is identifying the military and civilian positions that require clearances. Identifying clearance requirements for contractor personnel is still in the planning phase. Another problem is the efficient submission of investigation requests. In the 2 years since DOD and OPM announced the transfer of DOD's investigative functions and personnel to OPM, the two agencies did not ensure the seamless submission of DOD requests to OPM. DOD is developing software to remedy this problem.

Investigation: Delays in completing investigations are continuing. For February 2005, OPM--which now supplies an estimated 90 percent of the government's clearance investigations--reported that over 185,000 of its clearance investigations had exceeded timeliness goals. OPM's effort to add investigative staff is a positive step, but adding thousands of staff could result in continued timeliness problems and quality concerns as the staff gain experience. OPM's workload should decrease because of two recent initiatives: (1) eliminating a few of the investigative requirements for some reinvestigations of personnel updating their clearances and (2) requiring the acceptance of clearances and access granted to personnel moving from one agency to another.

Adjudication: In the past, DOD had difficulty monitoring who had been adjudicated for clearances and when the clearances needed to be renewed. While the Joint Personnel Adjudication System has combined databases from DOD's 10 adjudicative facilities to enhance monitoring, wider consolidation of government databases may be required. The Director of OPM will need to integrate all federal agencies into a single governmentwide database in order to meet a requirement established in a recent law. As of September 30, 2003, DOD had a backlog of roughly 90,000 adjudications. |
Critical infrastructures are physical or virtual systems and assets so vital to the nation that their incapacitation or destruction would have a debilitating impact on national and economic security and on public health and safety. These systems and assets—such as the electric power grid, chemical plants, and water treatment facilities—are essential to the operations of the economy and the government. Recent terrorist attacks and threats have underscored the need to protect our nation’s critical infrastructures. If vulnerabilities in these infrastructures are exploited, our nation’s critical infrastructures could be disrupted or disabled, possibly causing loss of life, physical damage, and economic losses. Although the vast majority of our nation’s critical infrastructures are owned by the private sector, the federal government owns and operates key facilities that use control systems, including oil, gas, water, energy, and nuclear facilities. Control systems are computer-based systems that are used within many infrastructures and industries to monitor and control sensitive processes and physical functions. Typically, control systems collect sensor measurements and operational data from the field, process and display this information, and relay control commands to local or remote equipment. Control systems perform functions that range from simple to complex. They can be used to simply monitor processes—for example, the environmental conditions in a small office building—or to manage the complex activities of a municipal water system or a nuclear power plant. In the electric power industry, control systems can be used to manage and control the generation, transmission, and distribution of electric power. For example, control systems can open and close circuit breakers and set thresholds for preventive shutdowns. 
The oil and gas industry uses integrated control systems to manage refining operations at plant sites, remotely monitor the pressure and flow of gas pipelines, and control the flow and pathways of gas transmission. Water utilities can remotely monitor well levels and control the wells’ pumps; monitor flows, tank levels, or pressure in storage tanks; monitor water quality characteristics such as pH, turbidity, and chlorine residual; and control the addition of chemicals to the water. Installing and maintaining control systems requires a substantial financial investment. DOE cites research estimating the value of the control systems used to monitor and control the electric grid and the oil and natural gas infrastructure at $3 billion to $4 billion. The thousands of remote field devices represent an additional investment of $1.5 billion to $2.5 billion. Each year, the energy sector alone spends over $200 million for control systems, networks, equipment, and related components and at least that amount in personnel costs. There are two primary types of control systems: distributed control systems and supervisory control and data acquisition (SCADA) systems. Distributed control systems typically are used within a single processing or generating plant or over a small geographic area, while SCADA systems typically are used for large, geographically dispersed operations. For example, a utility company may use a distributed control system to manage power generation and a SCADA system to manage its distribution. 
A SCADA system is generally composed of six components: (1) instruments, which sense conditions such as pH, temperature, pressure, power level, and flow rate; (2) operating equipment, which includes pumps, valves, conveyors, and substation breakers; (3) local processors, which communicate with the site’s instruments and operating equipment, collect instrument data, and identify alarm conditions; (4) short-range communications, which carry analog and discrete signals between the local processors and the instruments and operating equipment; (5) host computers, where a human operator can supervise the process, receive alarms, review data, and exercise control; and (6) long-range communications, which connect local processors and host computers using, for example, leased phone lines, satellite, and cellular packet data.

Several key federal plans focus on securing critical infrastructure control systems. The National Strategy to Secure Cyberspace calls for DHS and DOE to work in partnership with industry to develop best practices and new technology to increase the security of critical infrastructure control systems, to determine the most critical control systems-related sites, and to develop a prioritized plan for short-term cyber security improvements for those sites. In addition, DHS’s National Infrastructure Protection Plan specifically identifies control systems as part of the cyber infrastructure, establishes an objective of reducing vulnerabilities and minimizing the severity of attacks on these systems, and identifies programs directed at protecting control systems. Further, in May 2007, the critical infrastructure sectors issued sector-specific plans to supplement the National Infrastructure Protection Plan. Twelve sectors, including the chemical, energy, water, information technology, postal, emergency services, and telecommunications sectors, identified control systems within their respective sectors.
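The six-component SCADA architecture described earlier in this section can be sketched as a minimal data model. This is an illustrative sketch only; the class name `ScadaSystem`, its field names, and the sample water-utility values are assumptions for exposition, not terminology from the testimony or from any SCADA standard.

```python
from dataclasses import dataclass


@dataclass
class ScadaSystem:
    """Illustrative model of the six SCADA components described in the testimony."""
    instruments: list           # (1) sensors: pH, temperature, pressure, power level, flow rate
    operating_equipment: list   # (2) pumps, valves, conveyors, substation breakers
    local_processors: list      # (3) collect instrument data, identify alarm conditions
    short_range_links: list     # (4) analog/discrete signals between processors and field devices
    host_computers: list        # (5) operator supervision: alarms, data review, control
    long_range_links: list      # (6) leased phone lines, satellite, cellular packet data

    def components(self) -> dict:
        """Return the six component groups keyed by the names used in the testimony."""
        return {
            "instruments": self.instruments,
            "operating equipment": self.operating_equipment,
            "local processors": self.local_processors,
            "short-range communications": self.short_range_links,
            "host computers": self.host_computers,
            "long-range communications": self.long_range_links,
        }


# Hypothetical water-utility example, echoing the water sector uses mentioned above.
water_utility = ScadaSystem(
    instruments=["pH sensor", "chlorine residual sensor", "tank level sensor"],
    operating_equipment=["well pump", "chemical feed valve"],
    local_processors=["remote terminal unit RTU-1"],
    short_range_links=["4-20 mA instrument loop"],
    host_computers=["control room host"],
    long_range_links=["leased phone line"],
)
assert len(water_utility.components()) == 6
```

Modeling the system this way simply makes the taxonomy concrete; in a real deployment each group would of course be live hardware and communication channels rather than strings.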
Of these, most identified control systems as critical to their sector and listed efforts under way to help secure them.

Cyber threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Intentional threats include both targeted and nontargeted attacks, while unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. A targeted attack occurs when a group or individual specifically attacks a critical infrastructure system; a nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or malware is released on the Internet with no specific target. There is increasing concern among both government officials and industry experts regarding the potential for a cyber attack on a national critical infrastructure, including the infrastructure’s control systems. The Federal Bureau of Investigation has identified multiple sources of threats to our nation’s critical infrastructures, including foreign nation states engaged in information warfare; domestic criminals, hackers, and virus writers; and disgruntled employees working within an organization.

Control systems are vulnerable to flaws or weaknesses in system security procedures, design, implementation, and internal controls. When these weaknesses are accidentally triggered or intentionally exploited, they could result in a security breach. Vulnerabilities could occur in control systems’ policies, platform (including hardware, operating systems, and control system applications), or networks. Federal and industry experts believe that critical infrastructure control systems are more vulnerable today than in the past due to the increased standardization of technologies, the increased connectivity of control systems to other computer networks and the Internet, insecure connections, and the widespread availability of technical information about control systems.
Further, it is not uncommon for control systems to be configured with remote access through either a dial-up modem or over the Internet to allow remote maintenance or around-the-clock monitoring. If control systems are not properly secured, individuals and organizations may eavesdrop on or interfere with these operations from remote locations. Reported attacks and unintentional incidents involving critical infrastructure control systems demonstrate that a serious attack could be devastating. Although there is not a comprehensive source for incident reporting, the following examples, reported in government and media sources, demonstrate the potential impact of an attack.

Bellingham, Washington, gasoline pipeline failure. In June 1999, 237,000 gallons of gasoline leaked from a 16-inch pipeline and ignited an hour and a half later, causing three deaths, eight injuries, and extensive property damage. The pipeline failure was exacerbated by poorly performing control systems that limited the ability of the pipeline controllers to see and react to the situation.

Maroochy Shire sewage spill. In the spring of 2000, a former employee of an Australian software manufacturing organization applied for a job with the local government, but was rejected. Over a 2-month period, this individual reportedly used a radio transmitter on as many as 46 occasions to remotely break into the controls of a sewage treatment system. He altered electronic data for particular sewerage pumping stations and caused malfunctions in their operations, ultimately releasing about 264,000 gallons of raw sewage into nearby rivers and parks.

CSX train signaling system. In August 2003, the Sobig computer virus shut down train signaling systems throughout the East Coast of the United States. The virus infected the computer system at CSX Corporation’s Jacksonville, Florida, headquarters, shutting down signaling, dispatching, and other systems. According to an Amtrak spokesman, 10 Amtrak trains were affected. Train service was either shut down or delayed up to 6 hours.

Los Angeles traffic lights. According to several published reports, in August 2006, two Los Angeles city employees hacked into computers controlling the city’s traffic lights and disrupted signal lights at four intersections, causing substantial backups and delays. The attacks were launched prior to an anticipated labor protest by the employees.

Harrisburg, Pennsylvania, water system. In October 2006, a foreign hacker penetrated security at a water filtering plant. The intruder planted malicious software that was capable of affecting the plant’s water treatment operations. The infection occurred through the Internet and did not seem to be a direct attack on the control system.

Browns Ferry power plant. In August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device.

As control systems become increasingly interconnected with other networks and the Internet, and as the system capabilities continue to increase, so do the threats, potential vulnerabilities, types of attacks, and consequences of compromising these critical systems. Industry-specific organizations in various sectors, including the electricity, oil and gas, and water sectors, have initiatives under way to help improve control system security, including developing standards and publishing guidance. Our report being released today provides a detailed list of industry initiatives; several of these initiatives are described below.

Electricity.
In 2007, the North American Electric Reliability Corporation began implementing cyber security reliability standards that apply to control systems, and the Institute of Electrical and Electronics Engineers has several standards working groups addressing issues related to control systems security in the industry.

Oil and gas. The American Gas Association supported development of a report that would recommend how to apply encryption to protect gas utility control systems; and, over the past three years, the American Petroleum Institute has published two standards related to pipeline control systems integrity and security and the design and implementation of control systems displays.

Water. The water sector includes about 150,000 water, wastewater, and storm water organizations at all levels of government and has worked with the Environmental Protection Agency on development of the Water Sector-Specific Plan, which includes some efforts on control systems security. In addition, the Awwa Research Foundation is currently working on two research projects related to the cyber security of water utility SCADA systems.

Over the past few years, federal agencies, including DHS, DOE, and others, have initiated efforts to improve the security of critical infrastructure control systems. For example, DHS is sponsoring multiple control systems security initiatives, including the Control System Cyber Security Self Assessment Tool, an effort to improve control systems’ cyber security using vulnerability evaluation and response tools, and the Process Control System Forum, to build relationships with control systems’ vendors and infrastructure asset owners. Additionally, DOE sponsors control systems security efforts within the electric, oil, and natural gas industries. These efforts include the National SCADA Test Bed Program, which funds testing, assessments, and training in control systems security, and the development of a road map for securing control systems in the energy sector.
Our report being released today provides a more detailed list of initiatives being led by federal agencies. DHS, however, has not yet established a strategy to coordinate the various control systems activities across federal agencies and the private sector. In 2004, we recommended that DHS develop and implement a strategy for coordinating control systems security efforts among government agencies and the private sector. DHS agreed and issued a strategy that focused primarily on DHS’s initiatives. The strategy does not include ongoing work by DOE, the Federal Energy Regulatory Commission, NIST, and others. Further, it does not include the various agencies’ responsibilities, goals, milestones, or performance measures. Until DHS develops an overarching strategy that delineates various public and private entities’ roles and responsibilities and uses it to guide and coordinate control systems security activities, the federal government and private sector risk investing in duplicative activities and missing opportunities to learn from other organizations’ activities. Further, DHS is responsible for sharing information with critical infrastructure owners on control systems vulnerabilities, but lacks a rapid, efficient process for disseminating sensitive information to private industry owners and operators of critical infrastructures. An agency official noted that sharing information with the private sector can be slowed by staff turnover and vacancies at DHS, the need to brief agency and executive branch officials and congressional staff before briefing the private sector, and difficulties in determining the appropriate classification level for the information. 
Until the agency establishes an approach for rapidly assessing the sensitivity of vulnerability information and disseminating it—and thereby demonstrates the value it can provide to critical infrastructure owners—DHS’s ability to effectively serve as a focal point in the collection and dissemination of sensitive vulnerability information will continue to be limited. Without a trusted focal point for sharing sensitive information on vulnerabilities, there is an increased risk that attacks on control systems could cause a significant disruption to our nation’s critical infrastructures.

Control systems are an essential component of our nation’s critical infrastructure and their disruption could have a significant impact on public health and safety. Given the importance of control systems, in our report being released today, we are recommending that the Secretary of the Department of Homeland Security implement the following two actions: (1) develop a strategy to guide efforts for securing control systems, including agencies’ responsibilities as well as overall goals, milestones, and performance measures; and (2) establish a rapid and secure process for sharing sensitive control system vulnerability information with critical infrastructure control system stakeholders, including vendors, owners, and operators. In its comments on our report, DHS neither agreed nor disagreed with these recommendations, but stated that it would take them under advisement. The agency also discussed new initiatives to develop plans and processes that are consistent with our recommendations.

In summary, past incidents involving control systems, system vulnerabilities, and growing threats from a wide variety of sources highlight the risks facing control systems. The public and private sectors have begun numerous activities to improve the cyber security of control systems. However, the federal government lacks an overall strategy for coordinating public and private sector efforts.
DHS also lacks an efficient process for sharing sensitive information on vulnerabilities with private sector critical infrastructure owners. Until DHS completes the comprehensive strategy, the public and private sectors risk undertaking duplicative efforts. Further, without a streamlined process for advising private sector infrastructure owners of vulnerabilities, DHS is unable to fulfill its responsibility as a focal point for disseminating this information. If key vulnerability information is not in the hands of those who can mitigate its potentially severe consequences, there is an increased risk that attacks on control systems could cause a significant disruption to our nation’s critical infrastructures. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-6244, or by e-mail at [email protected]. Other key contributors to this testimony include Scott Borre, Heather A. Collins, Neil J. Doherty, Vijay D’Souza, Nancy Glover, Sairah Ijaz, Patrick Morton, and Colleen M. Phillips (Assistant Director). | Control systems--computer-based systems that monitor and control sensitive processes--perform vital functions in many of our nation's critical infrastructures such as electric power generation, transmission, and distribution; oil and gas refining; and water treatment and distribution.
The disruption of control systems could have a significant impact on public health and safety, which makes securing them a national priority. GAO was asked to testify on portions of its report on control systems security being released today. This testimony summarizes the cyber threats, vulnerabilities, and the potential impact of attacks on control systems; identifies private sector initiatives; and assesses the adequacy of public sector initiatives to strengthen the cyber security of control systems. To address these objectives, GAO met with federal and private sector officials to identify risks, initiatives, and challenges. GAO also compared agency plans to best practices for securing critical infrastructures. Critical infrastructure control systems face increasing risks due to cyber threats, system vulnerabilities, and the serious potential impact of attacks as demonstrated by reported incidents. Threats can be intentional or unintentional, targeted or nontargeted, and can come from a variety of sources. Control systems are more vulnerable to cyber attacks than in the past for several reasons, including their increased connectivity to other systems and the Internet. Further, as demonstrated by past attacks and incidents involving control systems, the impact on a critical infrastructure could be substantial. For example, in 2006, a foreign hacker was reported to have planted malicious software capable of affecting a water filtering plant's water treatment operations. Also in 2006, excessive traffic on a nuclear power plant's control system network caused two circulation pumps to fail, forcing the unit to be shut down manually. Multiple private sector entities such as trade associations and standards setting organizations are working to help secure control systems. Their efforts include developing standards and providing guidance to members. 
For example, the electricity industry has recently developed standards for cyber security of control systems and a gas trade association is developing guidance for members to use encryption to secure control systems. Federal agencies also have multiple initiatives under way to help secure critical infrastructure control systems, but more remains to be done to coordinate these efforts and to address specific shortfalls. Over the past few years, federal agencies have initiated efforts to improve the security of critical infrastructure control systems. However, there is as yet no overall strategy to coordinate the various activities across federal agencies and the private sector. Further, the Department of Homeland Security (DHS) lacks processes needed to address specific weaknesses in sharing information on control system vulnerabilities. Until public and private sector security efforts are coordinated by an overarching strategy, there is an increased risk that multiple organizations will conduct duplicative work. In addition, until information-sharing weaknesses are addressed, DHS risks not being able to effectively carry out its responsibility for sharing information on vulnerabilities with the private and public sectors. |
The national park system has 376 units. These park units have over 16,000 permanent structures, 8,000 miles of roads, 1,500 bridges and tunnels, 5,000 housing units, about 1,500 water and waste systems, 200 radio systems, over 400 dams, and more than 200 solid waste operations. According to the Park Service, these facilities are valued at over $35 billion. Needless to say, the proper care and maintenance of the national parks and their supporting infrastructure is essential to the continued use and enjoyment of our great national treasures by this and future generations. However, for years Park Service officials have highlighted the agency’s inability to keep up with its maintenance needs. In this connection, Park Service officials and others have often cited a continuing buildup of unmet maintenance needs as evidence of deteriorating conditions throughout the national park system. The accumulation of these unmet needs has become commonly referred to by the Park Service as its “maintenance backlog.” The reported maintenance backlog has increased significantly over the past 10 years—from $1.9 billion in 1987 to about $6.1 billion in 1997. Recently, concerns about the maintenance backlog situation within the National Park Service, as well as other federal land management agencies, have led the Congress to provide significant new sources of funding. These additional sources of funding were, in part, aimed at helping the agencies address their maintenance problems. It is anticipated that new revenues from the 3-year demonstration fee program will provide the Park Service over $100 million annually. In some cases, the new revenues will as much as double the amount of money available for operating individual park units. In addition, funds from a special one-time appropriation from the Land and Water Conservation Fund may also be available for use by the Park Service in addressing the maintenance backlog. 
These new revenue sources are in addition to the $300 million in annual operating appropriations which are used for maintenance activities within the agency. In 1997, in support of its fiscal year 1998 budget request, the Park Service estimated that its maintenance backlog was about $6.1 billion. Maintenance is generally considered to be work done to keep assets—property, plant, and equipment—in acceptable condition. It includes normal repairs and the replacement of parts and structural components needed to preserve assets. However, the composition of the maintenance backlog estimate provided by the Park Service includes activities that go beyond what could be considered maintenance. Specifically, the Park Service’s estimate of its maintenance backlog includes not only repair and rehabilitation projects to maintain existing facilities, but also projects for the construction of new facilities. Most of the estimated $6.1 billion maintenance backlog—about $5.6 billion, or about 92 percent—consists of construction projects. These projects, such as building roads and utility systems, are relatively large and normally exceed $500,000 and involve multiyear planning and construction activities. According to the Park Service, the projects are intended to meet the following objectives: (1) repair and rehabilitation; (2) resource protection issues, such as constructing or rehabilitating historic structures and trails and erosion protection activities; (3) health and safety issues, such as upgrading water and sewer systems; (4) new facilities in older existing parks; and (5) new facilities in new and developing parks. Appendix I of this testimony shows the dollar amounts and percentage of funds pertaining to each of the project objectives. The Park Service’s list of projects in the construction portion of the maintenance backlog reveals that over 21 percent, or $1.2 billion, of the $5.6 billion is for new facilities. 
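The composition figures cited above follow from simple arithmetic; a quick check, using only the rounded dollar figures reported in the testimony:

```python
# Quick check of the backlog composition percentages cited above,
# using the rounded dollar figures reported in the testimony.
total_backlog_b = 6.1      # total estimated maintenance backlog, $ billions
construction_b = 5.6       # construction portion of the backlog, $ billions
new_facilities_b = 1.2     # new-facility projects within construction

construction_share = construction_b / total_backlog_b
new_facility_share = new_facilities_b / construction_b

print(f"Construction share of backlog: {construction_share:.0%}")       # 92%
print(f"New-facility share of construction: {new_facility_share:.0%}")  # 21%
```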
We visited four parks to review the projects listed in the Park Service’s maintenance backlog estimates for those parks and found that the estimates included new construction projects as part of the backlog estimate. For example: Acadia National Park’s estimate included $16.6 million to replace a visitor center and construct a park entrance. Colonial National Historical Park included $24 million to build a Colonial Parkway bicycle and walking trail. Delaware Water Gap National Recreation Area included $19.2 million to build a visitor center and rehabilitate facilities. Rocky Mountain National Park included $2.4 million to upgrade entrance facilities. While we do not question the need for any of these facilities, they are directed at either further development of a park or modifications of and improvements to existing facilities in parks to meet the visions that park managers wish to achieve for their parks. These projects are not aimed at addressing the maintenance of existing facilities within the parks. As a result, including these types of projects in the maintenance backlog contributes to confusion about the actual maintenance needs of the national park system. In addition to projects clearly listed as new construction, other projects on the $5.6 billion list that are not identified as new construction, such as repair and rehabilitation of existing facilities, also include substantial amounts of new construction. Our review of the projects for the four parks shows that each included large repair and rehabilitation projects that contained tasks that would not be considered maintenance. These projects include new construction for adding, expanding, and upgrading facilities. For example, at Colonial National Historical Park, an $18 million project to protect Jamestown Island and other locations from erosion included about $4.7 million primarily for new construction of such items as buildings, boardwalks, wayside exhibits, and an audio exhibit. 
Beyond construction items, the remaining composition of the $6.1 billion backlog estimate—about 8 percent, or about $500 million—consists of smaller maintenance projects that include such items as rehabilitating campgrounds and trails and repairing bridges, and other items that recur on a cyclic basis, such as reroofing or repainting buildings. Excluded from the Park Service’s maintenance backlog figures is the daily, park-based operational maintenance to meet routine park needs, such as janitorial and custodial services, groundskeeping, and minor repairs. The Park Service compiles its maintenance backlog estimates on an ad hoc basis in response to requests from the Congress or others; it does not have a routine, systematic process for determining its maintenance backlog. The January estimate of the maintenance backlog—its most recent estimate—was based largely on information that was compiled over 4 years ago. This fact, as well as the absence of a common definition of what should be included in the maintenance backlog, contributed to an inaccurate and out-of-date estimate. Although documentation showing the maintenance backlog estimate of $6.1 billion was dated January 1997, for the most part, the Park Service’s data were compiled on the basis of information received from the individual parks in December 1993. A Park Service official stated that the 1993 data were updated by headquarters to reflect projects that had been subsequently funded during the intervening years. However, at each of the parks we visited in preparing for today’s testimony, we found that the Park Service’s most recent maintenance backlog estimate for each of the parks was neither accurate nor current. The four parks’ estimates of their maintenance backlog needs ranged from about $40 million at Rocky Mountain National Park to $120 million at Delaware Water Gap National Recreation Area. 
Our analysis of these estimates showed that they varied from the headquarters estimates by about $3 million and $21 million, respectively. The differences occurred because the headquarters estimates were based primarily on 4-year old data. According to officials from the four parks, they were not asked to provide specific updated data to develop the 1997 backlog estimate. The parks’ estimates, based on more current information, included such things as updated lists reflecting more recent projects, modified scopes, and more up-to-date cost estimates. For example, Acadia’s estimate to replace the visitor center and construct a park entrance has been reduced from $16.6 million to $11.6 million; the Delaware Water Gap’s estimate of $19.2 million to build a visitor center and rehabilitate facilities has been reduced to $8 million; and Rocky Mountain’s $2.4 million project to upgrade an entrance facility is no longer a funding need because it is being paid for through private means. In addition, one of the projects on the headquarters list had been completed. The Park Service has no common definition as to what items should be included in an estimate of the maintenance backlog. As a result, we found that officials we spoke to in Park Service headquarters, two regional offices, and four parks had different interpretations of what should be included in the backlog. In estimating the maintenance backlog, some of these officials would exclude new construction; some would include routine, park-based maintenance; and some would include natural and cultural resource management and land acquisition activities. In addition, when the Park Service headquarters developed the maintenance backlog estimate, it included both new construction and maintenance-type items in the estimate. For example, nonmaintenance items, such as adding a bike path to a park where none now exists or building a new visitor center, are included. 
The net result is that the maintenance backlog estimate is not a reliable measure of the maintenance needs of the national park system. In order to begin addressing its maintenance backlog, the Park Service needs (1) accurate estimates of its total maintenance backlog and (2) a means for tracking progress so that it can determine the extent to which its needs are being met. Currently, the agency has neither of these things. Yet, the need for them is more important now than ever before because in fiscal year 1998, over $100 million in additional funding is being made available for the Park Service that it could use to address its maintenance needs. This additional funding comes from the demonstration fee program. Also, although the exact amount is not yet known, additional funding may be made available from the Land and Water Conservation Fund. Park Service officials told us that they have not developed a precise estimate of the total maintenance backlog because the needs far exceed the funding resources available to address them. In their view, the limited funds available to address the agency’s maintenance backlog dictate that managers focus their attention on identifying only the highest priority projects on a year-to-year basis. Since the agency does not focus on the total needs but only on priorities for a particular year, it cannot determine whether the maintenance conditions of park facilities are improving or worsening. Furthermore, without information on the total maintenance backlog, it is difficult to fully measure what progress is being made with available resources. The recent actions by the Congress to provide the Park Service with substantial additional funding, which could be used to address its maintenance backlog, further underscores the need to ensure that available funds are being used to address those needs and to show progress in improving the conditions of the national park system. 
The Park Service estimates that the demonstration fee program could provide over $100 million a year to address the parks’ maintenance and other operational needs. In some parks, revenue from new and increased fees could as much as double the amount of money that has been previously available for operating individual park units. In addition to the demonstration fee program, fiscal year 1998 was the first year that appropriations from the Land and Water Conservation Fund could be used to address the maintenance needs of the national park system. However, according to Park Service officials, the exact amount provided from this fund for maintenance will not be determined until sometime later this month. Two new requirements that have been imposed on the Park Service, and other federal agencies, should, if implemented properly, help the agency to better address its maintenance backlog. These new requirements involve (1) changes in federal accounting standards and (2) the Government Performance and Results Act (the Results Act). Recent changes in federal accounting standards require federal agencies, including the Park Service, to develop better data on their maintenance needs. The standards define deferred maintenance and require that it be disclosed in agencies’ financial statements beginning with fiscal year 1998. To implement these standards, the Park Service is part of a facilities maintenance study team that has been established within the Department of the Interior to provide the agency with deferred maintenance information as well as guidance on standard definitions and methodologies for improving the ongoing accumulation of this information. In addition, as part of this initiative, the Park Service is doing an assessment of its assets to show whether they are in poor, fair, or good condition. This condition information is essential to providing the Park Service with better data on its overall maintenance needs. 
Furthermore, it is important to point out that as part of the agency’s financial statements, the accuracy of the Park Service’s deferred maintenance estimates will be subjected to annual audits. This audit scrutiny is particularly important given the long-standing concerns reported by us and others about the validity of the data on the Park Service’s maintenance backlog estimates. The Results Act should also help the Park Service to better address its maintenance backlog. In carrying out the Results Act, the Park Service is requiring its park managers to measure progress in meeting a number of key goals, including whether and to what degree the conditions of park facilities are being improved. If properly implemented, this requirement should make the Park Service as a whole, as well as individual park managers, more accountable for how it spends maintenance funds to improve the condition of park facilities. Once in place, this process should permit the Park Service to better demonstrate what is being accomplished with its funding resources. This is an important step in the right direction since our past work has shown that the Park Service could not hold park managers accountable for their spending decisions because they did not have a good system for tracking progress and measuring results. Mr. Chairman, this completes my statement. I would be happy to answer questions from you or any other Members of the Subcommittee. | Pursuant to a congressional request, GAO discussed: (1) the Park Service's estimate of the maintenance backlog and its composition; (2) how the agency determined the maintenance backlog estimate and whether it is reliable; and (3) how the agency manages the backlog. GAO noted that: (1) the Park Service's estimate of its maintenance backlog does not accurately reflect the scope of the maintenance needs of the park system; (2) the Park Service estimated, as of January 1997, that its maintenance backlog was about $6.1 billion; (3) most of this amount--about $5.6 billion, or about 92 percent--was construction projects; (4) of this amount, over 21 percent or $1.2 billion was for the construction of new facilities; (5) while GAO does not question the need for these facilities, including these kinds of new construction projects or projects that expand or upgrade park facilities in an estimate of the maintenance backlog is not appropriate because it goes beyond what could reasonably be viewed as maintenance; (6) as a result, including these projects in the maintenance backlog contributes to confusion about the actual maintenance needs of the national park system; (7) the Park Service's estimate of its maintenance backlog is not reliable; (8) its maintenance backlog estimates are compiled on an ad hoc basis in response to requests from Congress or others; (9) the agency does not have a routine, systematic process for determining its maintenance backlog; (10) the most recent estimate, as of January 1997, was based largely on information that was 
compiled by the Park Service over 4 years ago and has not been updated to reflect changing conditions in individual park units; (11) this fact, as well as the absence of a common definition of what should be included in the maintenance backlog, contributes to an inaccurate and out-of-date estimate; (12) the Park Service does not use the estimated backlog in managing park maintenance operations; (13) as such, it has not specifically identified its total maintenance backlog; (14) since the backlog far exceeds the funding resources being made available to address it, the Park Service has focused its efforts on identifying the highest-priority maintenance needs; (15) however, given that substantial additional funding resources can be used to address maintenance--over $100 million starting in fiscal year (FY) 1998--the Park Service should more accurately determine its total maintenance needs and track progress in meeting them so that it can determine the extent to which they are being met; (16) the Park Service is beginning to implement the legislatively mandated management changes in FY 1998; and (17) these changes could, if properly implemented, help the Park Service develop more accurate data on its maintenance backlog and track progress in addressing it. |
As we have previously reported, DOD began the F-35 acquisition program in October 2001 without adequate knowledge about the aircraft’s critical technologies or design. In addition, DOD’s acquisition strategy called for high levels of concurrency or overlap among development, testing, and production. In our prior work, we have identified the lack of adequate knowledge and high levels of concurrency as major drivers of the significant cost and schedule growth as well as performance shortfalls that the program has experienced since 2001. The program has been restructured three times since it began: first in December 2003, again in March 2007, and most recently in March 2012. The most recent restructuring was initiated in early 2010 when the program’s unit cost estimates exceeded critical thresholds established by statute—a condition known as a Nunn-McCurdy breach. DOD subsequently certified to Congress in June 2010 that the program was essential to national security and needed to continue. DOD then began efforts to significantly restructure the program and establish a new acquisition program baseline. These restructuring efforts continued through 2011 and into 2012, during which time the department increased the program’s cost estimates and extended its testing and delivery schedules. Since then, costs have remained relatively stable. Table 1 shows the cost, quantity, and schedule changes from the initial program baseline and the relative stability since the new baseline was established. As the program has been restructured, DOD has also reduced near-term aircraft procurement quantities. From 2001 through 2007, DOD deferred the procurement of 931 aircraft into the future, and then again from 2007 through 2012, DOD deferred another 450 aircraft. Figure 1 shows how planned quantities in the near term steadily declined over time. 
The F-35 is DOD’s most costly acquisition program, and over the last several years we have reported on the affordability challenges facing the program. As we reported in April 2016, the estimated total acquisition cost for the F-35 program was $379 billion, and the program would require an average of $12 billion per year from 2016 through 2038. The program expects to reach peak production rates for U.S. aircraft in 2022, at which point DOD expects to spend more than $14 billion a year on average for a decade (see fig. 2). Given these significant acquisition costs, we found that DOD would likely face affordability challenges as the F-35 program competes with other large acquisition programs, including the B-21 bomber, KC-46A tanker, and Ohio Class submarine replacement. In addition, in September 2014, we reported that DOD’s F-35 sustainment strategy may not be affordable. Through 2016, DOD had awarded contracts for production of 9 lots of F-35 aircraft, totaling 285 aircraft (217 aircraft for the U.S. and 68 aircraft for international partners or foreign military sales). At the time of this report, the contract for lot 10 had not been signed. In 2013, the Departments of the Navy and the Air Force issued a joint report to the congressional defense committees providing that the Marine Corps and Air Force would field initial operating capabilities in 2015 and 2016, respectively, with aircraft that had limited warfighting capabilities. The Navy did not plan to field its initial operating capability until 2018, after the F-35’s full warfighting capabilities had been developed and tested. These dates represented a delay of 5 to 6 years from the program’s initial baseline. As planned, the Marine Corps and Air Force declared initial operational capability (IOC) in July 2015 and August 2016, respectively. DOD will need more time and money than expected to complete the remaining 10 percent of the F-35 development program. 
DOD has experienced delays in testing the software and systems that provide warfighting capabilities, known as mission systems, largely because the software has been delivered late to be tested and once delivered has not worked as expected. Program officials have had to regularly divert resources from developing and testing of more advanced software capabilities to address unanticipated problems with prior software versions. These problems have compounded over time, and this past year was no exception. DOD began testing the final block of software— known as block 3F—later than expected, experienced unanticipated problems with the software’s performance, and thus did not complete all mission systems testing it had planned for 2016. As a result, the F-35 program office has noted that more time and money will be needed to complete development. The amount of time and money could vary significantly depending on the program’s ability to complete developmental and operational testing. We estimate that developmental testing could be delayed as much as 12 months, thus delaying the start of initial operational testing, and total development costs could increase by nearly $1.7 billion. In addition, the Navy’s IOC and the program’s full-rate production decision could also be delayed. DOD continues to experience delays in F-35 mission systems testing. Although mission systems testing is about 80 percent complete, the complexity of developing and testing mission systems has been troublesome. For the F-35 program, DOD is developing and fielding mission systems capabilities in software blocks: (1) Block 1, (2) Block 2A, (3) Block 2B, (4) Block 3i, and (5) Block 3F. Each subsequent block builds on the capabilities of the preceding block. Over the last few years, program officials have had to divert resources—personnel and infrastructure—from developing and testing of more advanced software blocks to address unanticipated problems with prior software blocks. 
Over time, this practice has resulted in compounding delays in mission systems testing. Blocks 1 through 3i are now complete, and the program is currently focused on developing and testing Block 3F, the final software block in the current development program. Figure 3 illustrates the mission systems software blocks being developed for the program, the percentage of test points completed by block, and the build-up to full warfighting capability with Block 3F. Program officials spent some of 2016 addressing problems with Block 3i mission systems unexpectedly shutting down and restarting—an issue known as instability—which delayed Block 3F testing. In early 2016, officials were developing and testing Block 3i concurrently with Block 3F. In order to ensure that the Block 3i instability was addressed in time for the Air Force’s planned IOC in August 2016, officials diverted resources from Block 3F. That decision delayed subsequent testing that had been planned for Block 3F. Further delays resulted from the discovery of instability and functionality problems with Block 3F. To mitigate some schedule delays, program officials implemented a new process to introduce software updates more quickly than normal. Although the quick software releases helped to ensure that testing continued, the final planned version of Block 3F, which was originally planned to be released to testing in February 2016, was not released until late November 2016, nearly a 10-month delay. As a result, program officials have identified the need for additional time to complete development. Program officials now project that developmental testing, which was expected to be completed in May 2017, will conclude in October 2017, 5 months later than planned. However, based on our analysis, the program’s projection is optimistic as it does not reflect historical F-35 test data. 
Program officials believe that going forward they may be able to devote more resources to mission systems testing, which could lead to higher test point completion rates than they have achieved in the past. According to GAO best practices, credible schedule estimates are rooted in historical data. As of November 2016, program officials estimated that the program will need to complete an average of as many as 384 mission systems test points per month in order to finish flight testing by October 2017—a rate that the program has rarely achieved before. Our analysis of historical test point data as of December 2016 indicates that the average test point execution rates are much lower, at 220 mission systems test points per month. In addition, historical averages suggest that test point growth—additions to the overall test points from discovery in flight testing—is much higher than program officials assume, while estimated deletions—test points that are considered no longer required—are lower than assumed. Using the historical F-35 averages, we project that developmental testing may not be completed until May 2018, a 12-month delay from the program’s current plan. Table 2 provides a comparison of the assumptions used to determine delays in developmental testing. Our estimate of delays in completing developmental testing does not include the time it may take to address the significant number of existing deficiencies. The Marine Corps and Air Force declared IOC with limited capability and with several deficiencies. As of October 2016, the program had more than 1,200 open deficiencies, and senior program and test officials deemed 276 of those critical or of significant concern to the military services. Several of the critical deficiencies are related to the aircraft’s communications, data sharing, and target tracking capabilities. 
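The schedule projection described above is essentially a rate calculation. The sketch below backs an implied remaining-test-point total out of the program office's own assumption (384 points per month from November 2016 through October 2017); that implied total is an illustration, not a figure reported in the testimony:

```python
# Sketch of the schedule projection described above. The remaining
# test-point total is implied rather than reported: the program office
# said it needed to average 384 points per month from November 2016 to
# finish by October 2017 (11 months), so roughly 384 * 11 points remained.
REQUIRED_RATE = 384       # points/month assumed by the program office
HISTORICAL_RATE = 220     # points/month, the historical average cited above
MONTHS_IN_PLAN = 11       # November 2016 through October 2017

remaining_points = REQUIRED_RATE * MONTHS_IN_PLAN  # implied remaining work
months_at_historical_rate = remaining_points / HISTORICAL_RATE

print(f"Implied remaining test points: {remaining_points}")
print(f"Months needed at the historical rate: {months_at_historical_rate:.1f}")
# About 19 months from November 2016 lands in mid-2018, broadly consistent
# with the May 2018 projection (a 12-month slip from the May 2017 plan).
```

This sketch ignores test point growth and deletions, which, as noted above, would push the completion date further out rather than pull it in.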
Although the final planned version of Block 3F software was released to flight testing in November 2016 and contained all 332 planned warfighting capabilities, not all of those capabilities worked as intended. In accordance with program plans, it was the first time some of the Block 3F capabilities had been tested. According to a recent report by the Director, Operational Test and Evaluation (DOT&E), fixes for less than half of the 276 deficiencies were included in the final planned version of Block 3F software. Prime contractor officials stated that additional software releases will likely be required to address deficiencies identified during the testing of the final planned version of Block 3F software, but they do not yet know how many releases will ultimately be needed. Delays in developmental testing will likely drive delays in current plans to start F-35 initial operational test and evaluation. Program officials have noted that according to their calculations developmental testing will end in October 2017 and initial operational testing will begin in February 2018. However, DOT&E officials, who approve operational test plans, anticipate that the program will more likely start operational testing in late 2018 or early 2019, at the earliest. Figure 4 provides an illustration of the current program schedule and DOT&E’s projected delays. DOT&E’s estimate for the start of initial operational testing is based on the office’s projection that developmental testing will end in July 2018 and that retrofits needed to prepare the aircraft for operational testing will not be completed until late 2018 at the earliest. There are 23 aircraft—many of which are early production aircraft—that require a total of 155 retrofits before they will be ready to begin operational testing. As of January 2017, 20 of those retrofits were not yet under contract, and program officials anticipated some retrofits would be completed in late 2018. 
To mitigate possible schedule delays, program officials are considering a phased start to operational testing. However, current program test plans require training and preparation activities before initial operational test and evaluation begins. Those activities, as outlined in the test plan, are expected to take approximately 6 months. Changes to this approach would require approval from DOT&E. According to DOT&E officials, however, the program has not yet provided any detailed strategy for implementing a new approach or identified a time frame for revising the test plan. Significant delays in initial operational testing will likely affect two other upcoming program decisions: (1) the Navy’s decision to declare IOC and (2) DOD’s decision to begin full-rate production. In a 2015 report to the congressional defense committees, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that the Navy’s IOC declaration is on track for February 2019 pending completion of initial operational test and evaluation. If initial operational testing does not begin until February 2019 as the DOT&E predicts, the Navy may need to consider postponing its IOC date. Likewise, DOD’s full-rate production decision, currently planned for April 2019, may have to be delayed. According to statute, a program may not proceed beyond low-rate initial production into full-rate production until initial operational test and evaluation is completed and DOT&E has submitted to the Secretary of Defense and the congressional defense committees a report that analyzes the results of operational testing. If testing does not begin until February 2019 and takes 1 year, as expected, DOD will not have the report in time to support a full-rate production decision by April 2019. The current delays in F-35 developmental testing will also result in increased development costs. 
Based on the program office’s estimate of a 5-month delay in developmental testing, the F-35 program will need an additional $532 million to complete the development contract. According to GAO best practices, credible cost estimates are also rooted in historical data. Using historical contractor cost data from April 2016 to September 2016, we calculated the average monthly cost associated with the development contract. If developmental testing is delayed 12 months, as we estimate, and operational testing is not completed until 2020, as projected by DOT&E, then we estimate that the program could need more than an additional $1.7 billion to complete the F-35 development contract. Similarly, the Cost Assessment and Program Evaluation office within the Office of the Secretary of Defense has estimated that the program will likely need more than $1.1 billion to complete the development contract. In these estimates, the majority of the additional funding would be needed in fiscal year 2018. Specifically, program officials believe that an additional $353.8 million may be needed in fiscal year 2018, while we estimate that they could need more than three times that amount—approximately $1.3 billion—as illustrated in figure 5. The program plans to fund its estimated development program deficit through several means. For example, although the program office’s 2018 preliminary budget projection reflected a reduction of $81 million in development funding over the next few years, as compared to DOD’s fiscal year 2017 budget request, program officials expect DOD to restore this reduction in its official fiscal year 2018 budget request. In addition, program officials plan to increase the budget request, as compared to their fiscal year 2017 budget request, for development funding in fiscal years 2018, 2019, and 2020 by $451 million and likewise reduce their budget request for procurement funding over those years. 
To make up for the reduction in requested procurement funding, the program plans to reprogram available procurement funds appropriated in prior fiscal years. Any additional funding beyond $451 million would likely have to come from some other source. Figure 5 compares DOD’s and our estimates for development funding needs from fiscal years 2018 through 2021. As developmental testing is delayed and DOD procures more aircraft every year, concurrency costs—the costs of retrofitting delivered aircraft—increase. For example, from 2015 to 2016, the program experienced a $70 million increase in concurrency costs. This increase was partially driven by the identification of new technical issues found during flight testing that were not previously forecasted, including problems with the F-35C outer-wing structure and F-35B landing gear. Problems such as these have to be fixed on aircraft that have already been procured. Thus far, DOD has procured 285 aircraft and has experienced a total of $1.77 billion in concurrency costs. Although testing is mostly complete, any additional delays will likely result in delays in the incorporation of known fixes, which would increase the number of aircraft that will require retrofits and rework and further increase concurrency costs as more aircraft are procured. According to program officials, most of the retrofits going forward are likely to be software related and thus less costly. However, according to DOD’s current plan, 498 aircraft will be procured by the time initial operational testing is complete. If the completion of operational testing is delayed to 2020, as DOT&E predicts, the number of procured aircraft will increase to 584 as currently planned, making 86 additional aircraft subject to any required retrofits or rework. 
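The growth in retrofit exposure follows directly from the procurement profile: every aircraft delivered before testing ends may require rework. Using the aircraft counts quoted above, the arithmetic is simply:

```python
# Aircraft counts taken from the report; the subtraction quantifies the added
# retrofit exposure if the end of initial operational testing slips to 2020.
procured_by_planned_test_end = 498  # aircraft procured by planned test completion
procured_by_delayed_test_end = 584  # aircraft procured if completion slips to 2020
additional_exposed = procured_by_delayed_test_end - procured_by_planned_test_end
print(additional_exposed)  # 86 additional aircraft subject to retrofits or rework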
In fiscal year 2018, F-35 program officials expect to invest more than $1.2 billion to start two efforts while simultaneously facing significant shortfalls in completing the F-35 baseline development program, as discussed above. Specifically, DOD and program officials project that in fiscal year 2018 the program will need over $600 million to begin development of follow-on modernization of the F-35 and more than $650 million to procure economic order quantities (EOQ) of parts to achieve cost savings during procurement. Contracting for EOQ generally refers to the purchase of parts in larger more economically efficient quantities to minimize the cost of these items. DOD officials emphasized that the specific amount of funding needed for these investments could change as the department finalizes its fiscal year 2018 budget request. Regardless, these investments may be premature. Early Block 4 requirements, which represent new capabilities beyond the original requirements, may not be fully informed before DOD plans to solicit proposals from contractors for how they might meet the government’s requirements—a process known as request for proposal (RFP). According to DOD policy, the Development RFP Release Decision Point is the point at which a solid business case is formed for a new development program. Until Block 3F testing is complete, DOD will not have the knowledge it needs to develop and present an executable business case for Block 4, with reliable cost and funding estimates. Due to evolving threats and changing warfighting environments, program officials project that the program will need over $600 million in fiscal year 2018 to award a contract to begin developing new F-35 capabilities, an effort referred to as follow-on modernization. However, the requirements for the first increment of that effort, known as Block 4, have not been finalized. Block 4 is expected to be developed and delivered in four phases—currently referred to as 4.1, 4.2, 4.3, and 4.4. 
Program officials expect phases 4.1 and 4.3 to be primarily software updates, while 4.2 and 4.4 consist of more significant hardware changes. The program has drafted a set of preliminary requirements for Block 4 that focused on the top-level capabilities needed in phases 4.1 and 4.2, but the requirements for the final two phases have not been fully defined. In addition, as of January 2017, these requirements had not been approved by the Joint Requirements Oversight Council. Delays in developmental testing of Block 3F are also likely to affect Block 4 requirements. DOD policy states that requirements are to be approved before a program reaches the Development RFP Decision Point in the acquisition process. GAO best practices emphasize the importance of matching requirements and resources in a business case before a development program begins. For DOD, the Development RFP Release Decision Point is the point at which plans for the program must be most carefully reviewed to ensure that all requirements have been approved, risks are understood and under control, the program plan is sound, and the program will be affordable and executable. Currently, F-35 program officials plan to release the RFP for Block 4.1 development in the third quarter of fiscal year 2017, nearly 1 year before we estimate Block 3F developmental testing will be completed. Program officials have stated that Block 3F is the foundation for Block 4, but continuing delays in Block 3F testing make it difficult to fully understand Block 3F functionality and its effect on early Block 4 capabilities. If new deficiencies are identified during the remainder of Block 3F testing, the need for new technologies may arise, and DOD may need to review Block 4 requirements again before approving them. 
In April 2016, we reported that the F-35 program office was considering what it referred to as a block buy contracting approach that we noted had some potential economic benefits but could limit congressional funding flexibility. The program office has since changed its strategy to consist of contracts for EOQ of 2 years’ worth of aircraft parts followed by a separate annual contract for procurement of lot 12 aircraft with annual options for lots 13 and 14 aircraft. Each of these options would be negotiated separately, similar to how DOD currently negotiates contracts. As of January 2017, details of the program office’s EOQ approach were still in flux. In 2015, the program office contracted with RAND Corporation to conduct a study of the potential cost savings associated with several EOQ approaches. According to the results of that study, in order for the government to get the greatest benefit, the aircraft and engine contractors would need to take on risk by investing in EOQ on behalf of the department in fiscal year 2017. Program officials envision that under this arrangement the contractors would be repaid by DOD at a later date. However, as of January 2017, contractors stated they were still negotiating the terms of this arrangement; therefore, the specific costs and benefits remained uncertain. Despite this uncertainty, the program office plans to seek congressional approval to make EOQ purchases and expects to need more than $650 million for that purpose in fiscal year 2018. Program officials believe that this upfront investment would result in a significant savings over the next few years for the U.S. services. However, given the uncertainties around the level of contractor investment, it is not clear whether an investment of more than $650 million, if that is the final amount DOD requests in fiscal year 2018, will be enough to yield significant savings. 
Regardless, with cost growth and schedule delays facing the F-35 baseline development program, it is unclear whether DOD can afford to fund this effort at this time. According to internal control standards, agencies should communicate with external stakeholders, such as Congress. With a potential investment of this size, particularly in an uncertain budget environment, it is important that program officials finalize the details of this approach before asking for congressional approval and provide Congress with a clear understanding of the associated costs to ensure that funding decisions are fully informed. The F-35 airframe and engine contractors continue to report improved manufacturing efficiency, and program data indicate that reliability and maintainability are improving in some areas. Over the last 5 years, the number of U.S. aircraft produced and delivered by Lockheed Martin has increased, and manufacturing efficiency and quality have improved over time. Similarly, manufacturing efficiency and quality metrics are improving for Pratt & Whitney. Although some engine aircraft reliability and maintainability metrics are not meeting program expectations, there has been progress in some areas, and there is still time for further improvements. Overall the airframe manufacturer, Lockheed Martin, is improving efficiency and product quality. Over the last 5 years, the number of aircraft produced and delivered by Lockheed Martin has increased from 29 aircraft in 2012 to 46 aircraft in 2016. Since 2011, a total of 200 production aircraft have been delivered to DOD and international partners, 46 of which were delivered in 2016. As of January 2017, 142 aircraft were in production, worldwide. As more aircraft are delivered, the number of labor hours needed to manufacture each aircraft declines. Labor hours decreased from 2015 to 2016, indicating production maturity. 
In addition, instances of production line work done out of sequence remains relatively low, with the exception of an increase at the end of 2016 due to technical issues, such as repairing coolant tube insulation (see app. III). Further, the number of quality defects and total hours spent on scrap, rework, and repair declined in 2016. Although data indicate that airframe manufacturing efficiency and quality continue to improve, supply chain challenges remain. Some suppliers are delivering late and non-conforming parts, resulting in production line inefficiencies and workarounds. For example, in 2016, Lockheed Martin originally planned to deliver 53 aircraft, but quality issues with insulation on the coolant tubes in the fuel tanks resulted in the contractor delivering 46 aircraft. According to Lockheed Martin officials, late deliveries of parts are largely due to late contract awards and supply base capacity. While supplier performance is generally improving, it is important for suppliers to be prepared for both production and sustainment support going forward. Inefficiencies, such as conducting production line work out of sequence, could be exacerbated if late delivery of parts continues as production more than doubles over the next 5 years. The engine manufacturer, Pratt & Whitney, is also improving efficiency. As of October 2016, Pratt & Whitney had delivered 279 engines. The labor hours required to assemble an F-35 engine decreased quickly and has remained relatively steady since around the 70th engine produced, and little additional efficiency is expected to be gained. Other Pratt & Whitney manufacturing metrics indicate that production efficiency and quality are improving. Scrap, rework, and repair costs were reduced from 2.22 percent in 2015 to 1.8 percent in 2016. We previously reported that according to Pratt & Whitney officials, moving from a hollow blade design to a solid blade would reduce scrap and rework costs because it is easier to produce. 
However, Pratt & Whitney experienced unanticipated problems with cracking in the solid blade design. As a result, Pratt & Whitney is continuing to produce a hollow blade while it further investigates the difficulty and costs associated with a solid blade design. Pratt & Whitney’s supply chain continues to make some improvements. For example, critical parts are being delivered ahead of schedule, and some are already achieving 2017 rate requirements. To further ensure that suppliers are capable of handling full-rate production, Pratt & Whitney is pursuing the potential to have multiple suppliers for some engine parts, which officials believe will help increase manufacturing capacity within the supply chain. Although the program has made progress in improving system-level reliability and maintainability, some metrics continue to fall short of program expectations in several key areas. For example, as shown in figure 6, while metrics in most areas were overall trending in the right direction, the F-35 program office’s internal assessment indicated that as of August 2016 the F-35 fleet was falling short of reliability and maintainability expectations in 11 of 21 areas. Although many of the metrics remain below program expectations, some of the metrics have shown improvement over the last year, and time remains for continued improvements. For example, our analysis indicates that since 2015, the F-35A reliability has improved from 4.3 mean flight hours between failure attributable to design issues to 5.7 hours, nearly achieving the goal at system maturity of 6 hours. The F-35A mean flight hours between maintenance event metric has also improved and is now meeting program expectations. As of August 2016, the F-35 fleet had only flown a cumulative total of 63,187 flight hours. 
The program has time for further improvement as the ultimate goals for these reliability and maintainability metrics are to be achieved by full system maturity, or 200,000 cumulative flight hours across the fleet. The program also plans to improve these metrics through additional design changes. Engine reliability varied in 2016. In April 2016, we reported that Pratt & Whitney had implemented a number of design changes that resulted in significant improvements to one reliability metric: mean flight hours between failure attributable to design issues. At the time of our report, contractor data indicated the F-35A and F-35B engines were at about 55 percent and 63 percent, respectively, of where the program expected them to be. According to contractor data as of September 2016, the program was unable to achieve a significant increase in reliability over the last year, which left the F-35A and F-35B engines further below expectations—at about 43 percent and 41 percent, respectively. Other reliability metrics such as engine’s impact on aircraft availability, engine maintenance man-hours, and the time between engine removals are meeting expectations. On average, from June 2016 through November 2016, the engine affected only about 1.47 percent of the overall aircraft availability rates, and none of the top 30 drivers affecting aircraft availability were related to the engine. According to Pratt & Whitney officials, the F-35 engine requires fewer maintenance man-hours per flight hour than legacy aircraft, and engines for the F-35A and F-35B are currently performing better than required for the average number of flight hours between engine removals. Program and contractor officials continue to identify ways to further improve reliability through a number of design changes and expect reliability to continue to improve lot over lot. As the F-35 program approaches the end of development, its schedule and cost estimates are optimistic. 
The program’s cost and schedule estimates to complete development are hundreds of millions of dollars below and several months under other independent estimates, including our own. If the program experiences schedule delays as we predict, it could require a total of nearly $1.5 billion in fiscal year 2018 alone. However, program officials project that the program will only need $576.2 million in fiscal year 2018 to complete baseline development. At the same time, program officials expect that more than $1.2 billion could be needed to commit to Block 4 and EOQ in fiscal year 2018. DOD must prioritize funding for the baseline development program over the program office’s desire for EOQ and Block 4. If baseline development is not prioritized and adequately funded, and costs increase as predicted by GAO and others, then the program will have less recourse for action and development could be further delayed. In addition, with baseline development still ongoing the program will not likely have the knowledge it needs to present a sound business case for soliciting contractor proposals for Block 4 development in fiscal year 2017. Although Block 4 and EOQ may be desirable, prioritizing funding for these efforts may not be essential at this time. Prioritizing funding for baseline development over these two efforts would ensure that the program has the time and money needed to properly finish development and thus lay a solid knowledge-based foundation for future efforts. To ensure that DOD adequately prioritizes its resources to finish F-35 baseline development and delivers all of the promised warfighting capabilities and that Congress is fully informed when making fiscal year 2018 budget decisions, we are making the following three recommendations to the F-35 program office through the Secretary of Defense. 1. Reassess the additional cost and time needed to complete developmental testing using historical program data. 2. 
Delay the issuance of the Block 4 development request for proposals at least until developmental testing is complete and all associated capabilities have been verified to work as intended. 3. Finalize the details of DOD and contractor investments associated with an EOQ purchase in fiscal year 2018, and submit a report to Congress with the fiscal year 2018 budget request that clearly identifies the details, including costs and benefits of the finalized EOQ approach. DOD provided us with written comments on a draft of this report. DOD’s comments are reprinted in appendix IV and summarized below. DOD also provided technical comments, which were incorporated as appropriate. DOD did not concur with our recommendation to reassess the additional cost and time needed to complete developmental testing using historical program data. DOD stated that it will continue to assess the assumptions and decisions made, and communicate any necessary adjustments relative to both cost and time needed to complete developmental testing. DOD also stated that it had considered historical data in its assessment and concluded that developmental testing could extend into February 2018. While this possible slip is noted in our report, it is unclear to us the extent to which the data underpinning DOD’s assessment reflected the program’s historical averages. While the program’s analysis that we examined did reflect test point accomplishment rates that were more aligned with what the program achieved in 2016 (i.e. around 290 points per month) those rates were still higher than the historical average. Other key inputs to that analysis also differed significantly from the program’s historical averages. For example, program officials assumed only a 42 percent test point growth rate when the program’s historical average test point growth was 63 percent, and in 2016 alone the test point growth rate was 115 percent. Several other DOD officials have identified possible delays beyond February 2018. 
In a memo sent to Congress in December 2016, the Under Secretary of Defense for Acquisition, Technology and Logistics stated that developmental testing could go as long as May 2018, and DOT&E analysis also indicates that developmental testing may not conclude until mid-2018. We continue to believe that our recommendation is valid. DOD also did not concur with our recommendation to delay the issuance of the Block 4 development request for proposals until developmental testing is complete. According to DOD, delaying the request for proposals could unnecessarily delay delivery of needed capabilities to the warfighters. However, as program officials stated, Block 3F software establishes the foundation for Block 4. Therefore, continuing delays in Block 3F testing will likely make it difficult to fully understand Block 3F functionality and its effect on early Block 4 requirements. If new deficiencies are identified during the remainder of Block 3F testing, the need for new technologies may arise, and DOD may need to review Block 4 requirements again before approving them which could lead to additional delays. Therefore, we continue to believe that our recommendation is valid. DOD stated that it partially concurred with our third recommendation to finalize the details of investments associated with an EOQ purchase in fiscal year 2018, and submit a report to Congress with the fiscal year 2018 budget request that clearly identifies those details. However, in its response, the department outlined steps that address it. For example, DOD stated that it had finalized the details of DOD and contractor investments associated with an EOQ purchase and will brief Congress on the details, including costs and benefits of the finalized EOQ approach. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; and the Under Secretary of Defense for Acquisition, Technology and Logistics. 
In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess the F-35 program’s remaining development and testing we interviewed officials from the program office and contractors—Lockheed Martin and Pratt & Whitney. We obtained and analyzed data on mission systems test point execution, both planned and accomplished from 2011 through 2016 to calculate historical test point averages per month. We compared test progress against the total program requirements to determine the number of test points that were completed and remaining as of December 2016. We used the average test point rate based on the historical data to determine the number of months needed to complete the remaining test points. To identify the program’s average monthly costs, we analyzed contractor cost performance data from April 2016 through September 2016 to identify average contract costs per month. Using a 12-month delay and the average contract costs per month, we calculated the costs to complete developmental testing. In order to determine costs to complete development, we first determined the percent change, year to year, in the program office’s development funding requirement estimate from 2018 to 2021. We then reduced our estimate using those percentages from 2018 to 2021. We discussed key aspects of F-35 development progress, including flight testing progress, with program management and contractor officials as well as DOD test officials and program test pilots. 
To assess the reliability of the test and cost data, we reviewed the supporting documentation and discussed the development of the data with DOD officials instrumental in producing them. In addition, we interviewed officials from the F-35 program office, Lockheed Martin, Pratt & Whitney, and the Director, Operational Test and Evaluation office to discuss development test plans, achievements, and test discoveries. To assess DOD’s proposed plans for future F-35 investments, we discussed cost and manufacturing efficiency initiatives, such as the economic order quantities approach, with contractor and program office officials to understand potential cost savings and plans. To assess the program’s follow-on modernization plans, we discussed the program’s plans with program office officials. We reviewed the fiscal year 2017 budget request to identify costs associated with the effort. We also reviewed and analyzed best practices identified by GAO and reviewed relevant DOD policies and statutes. We compared the acquisition plans to these policies and practices. To assess ongoing manufacturing and supply chain performance, we obtained and analyzed data related to aircraft delivery rates and work performance data from January 2016 to December 2016. These data were compared to program objectives identified in these areas and used to identify trends. We reviewed data and briefings provided by the program office, Lockheed Martin, Pratt & Whitney, and the Defense Contract Management Agency in order to identify issues in manufacturing processes. We discussed reasons for delivery delays and plans for improvement with Lockheed Martin and Pratt & Whitney. We collected and analyzed data related to aircraft quality through December 2016. We collected and analyzed supply chain performance data and discussed steps taken to improve quality and deliveries with Lockheed Martin and Pratt & Whitney. 
We also analyzed reliability and maintainability data and discussed these issues with program and contractor officials. We assessed the reliability of DOD and contractor data by reviewing existing information about the data and interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from June 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As developmental testing nears completion, the F-35 program continues to address technical risks. The program has incorporated design changes that appear to have mitigated several of the technical risks that we have highlighted in prior reports, including problems with the arresting hook system and bulkhead cracks on the F-35B. However, over the past year, the program continued to address risks with the Helmet Mounted Display, Autonomic Logistics Information System (ALIS), the ejection seat and engine seal that we have identified in the past. The program also identified new risks with the F-35C wing structure and catapult launches, and coolant tube insulation. The status of the Department of Defense’s (DOD) efforts to address these issues is as follows: Helmet Mounted Display: A new helmet intended to address shortfalls in night vision capability, among other things, was developed and delivered to the program in 2015. Developmental testing of the new helmet is mostly complete, and officials believe that issues such as latency and jitter have been addressed. 
Green glow, although improved, continues to add workload for the pilots when landing at sea. Officials believe that they have done as much as they can to fix the green glow problems with the hardware currently available. ALIS: ALIS continues to lack required capabilities; for instance, engine parts information is not included in the current version of ALIS, although it is expected to be completed in the spring of 2017. In 2016, officials began testing ALIS in an operational environment which has led to some improvements. However, capabilities, including the prognostics health management downlink, have been deferred to follow-on modernization. In 2016, officials acknowledged compounding development delays and restructured the development schedule for ALIS. The new schedule shows that some capabilities that were planned in the earlier versions of ALIS will now be deferred to later versions. In April 2016, we reported that F-35 pilots and maintainers identified potential functionality risks to ALIS and that DOD lacked a plan to address these risks as key milestone dates approached, which could result in operational and schedule risks. Engine seal: Officials have identified a design change to address the technical problem that resulted in an engine fire in June 2014. This design change was validated and incorporated into production in 2015. Engine contractor officials identified 194 engines that needed to be retrofitted, and as of October 2016, 189 of those retrofits had been completed. The engine contractor, Pratt & Whitney, is paying for these retrofits. Ejection seat: In 2015, officials discovered that pilots who weigh less than 136 pounds could possibly suffer neck injuries during ejection. Officials stated that the risk of injury is due to the over-rotation of the ejection seat in combination with the thrust from the parachute deployment during ejection. 
Officials noted that although the problem was discovered during testing of the new Helmet Mounted Display, the helmet’s weight was not the root cause. The program has explored a number of solutions to ensure pilot safety including installing a switch for light-weight pilots that would slow the release of the parachute deployment, installing a head support panel that would reduce head movement, and reducing the weight of the helmet. The final design completed qualification testing in 2016 and is expected to be incorporated into production lot 10. The cost of these changes has not yet been determined. F-35C outer-wings: In 2016, officials identified structural issues on the F- 35C outer-wing when carrying an AIM-9X missile. In order to resume the test program, officials identified a design change to include strengthening the wings’ material that was incorporated onto a test aircraft. Officials expect to incorporate retrofits to delivered aircraft by 2019 and will incorporate changes into production in lot 10. F-35C catapult launches: In 2016, officials identified issues with violent, uncomfortable, and distracting movement during catapult launches. Specifically, officials stated that the nose gear strut moves up and down as an aircraft accelerates to takeoff, which can cause neck and jaw soreness for the pilot because the helmet and oxygen mask are pushed back on the pilot’s face during take-off. This can be a safety risk as the helmet can hit the canopy, possibly resulting in damage, and flight critical symbology on the helmet can become difficult to read during and immediately after launch due to the rotation of the helmet on the pilot’s head. Officials evaluated several options for adjusting the nose gear to alleviate the issue, but determined that none of the options would significantly affect the forces felt by the pilot. Officials subsequently assembled a team to identify a root cause and a redesign. 
According to officials, adjustments to the catapult system load settings are being considered to address this issue, and a design change to the aircraft may not be required. But flight testing of the proposed changes is required to confirm this solution. Insulation on coolant tubes: During maintenance on an aircraft in 2016, officials found that insulation around coolant tubes within the aircraft’s fuel system were cracking and contaminating the fuel lines. According to officials, the problem was a result of a supplier using the incorrect material for insulation. The faulty insulation was installed on 57 aircraft— including the entire Air Force initial operational capability fleet—which were prohibited from flight until the insulation was removed. Officials determined that the insulation would not need to be replaced as the aircraft meets specifications without it. Officials are considering removing the insulation from the tubes across the rest of the aircraft going forward. As of January 2017, all of the fielded aircraft have been repaired and returned to flight. In addition to the contact named above, the following staff members made key contributions to this report: Travis Masters (Assistant Director), Emily Bond, Raj Chitikila, Kristine Hassinger, Karen Richey, Jillena Roberts, Megan Setser, Hai Tran, and Robin Wilson. F-35 Joint Strike Fighter: Continued Oversight Needed as Program Plans to Begin Development of New Capabilities. GAO-16-390. Washington, D.C.: April 14, 2016. F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System. GAO-16-439. Washington, D.C.: April 14, 2016. F-35 Joint Strike Fighter: Preliminary Observations on Program Progress. GAO-16-489T. Washington, D.C.: March 23, 2016. F-35 Joint Strike Fighter: Assessment Needed to Address Affordability Challenges. GAO-15-364. Washington, D.C.: April 14, 2015. F-35 Sustainment: Need for Affordable Strategy, Greater Attention to Risks, and Improved Cost Estimates. 
GAO-14-778. Washington, D.C.: September 23, 2014. F-35 Joint Strike Fighter: Slower Than Expected Progress in Software Testing May Limit Initial Warfighting Capabilities. GAO-14-468T. Washington, D.C.: March 26, 2014. F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities. GAO-14-322. Washington, D.C.: March 24, 2014. F-35 Joint Strike Fighter: Restructuring Has Improved the Program, but Affordability Challenges and Other Risks Remain. GAO-13-690T. Washington, D.C.: June 19, 2013. F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain. GAO-13-500T. Washington, D.C.: April 17, 2013. F-35 Joint Strike Fighter: Current Outlook Is Improved, but Long-Term Affordability Is a Major Concern. GAO-13-309. Washington, D.C.: March 11, 2013. Fighter Aircraft: Better Cost Estimates Needed for Extending the Service Life of Selected F-16s and F/A-18s. GAO-13-51. Washington, D.C.: November 15, 2012. Joint Strike Fighter: DOD Actions Needed to Further Enhance Restructuring and Address Affordability Risks. GAO-12-437. Washington, D.C.: June 14, 2012. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-12-400SP. Washington, D.C.: March 29, 2012. Joint Strike Fighter: Restructuring Added Resources and Reduced Risk, but Concurrency Is Still a Major Concern. GAO-12-525T. Washington, D.C.: March 20, 2012. Joint Strike Fighter: Implications of Program Restructuring and Other Recent Developments on Key Aspects of DOD’s Prior Alternate Engine Analyses. GAO-11-903R. Washington, D.C.: September 14, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Is Still Lagging. GAO-11-677T. Washington, D.C.: May 19, 2011. Joint Strike Fighter: Restructuring Places Program on Firmer Footing, but Progress Still Lags. GAO-11-325. Washington, D.C.: April 7, 2011. 
Joint Strike Fighter: Restructuring Should Improve Outcomes, but Progress Is Still Lagging Overall. GAO-11-450T. Washington, D.C.: March 15, 2011.

The F-35 Joint Strike Fighter is DOD's most expensive and ambitious acquisition program. Acquisition costs alone are estimated at nearly $400 billion, and beginning in 2022, DOD expects to spend more than $14 billion a year on average for a decade. The National Defense Authorization Act for Fiscal Year 2015 included a provision for GAO to review the F-35 acquisition program annually until the program reaches full-rate production. This, GAO's second report in response to that mandate, assesses, among other objectives, (1) progress of remaining program development and testing and (2) proposed future plans for acquisition investments. To conduct this work, GAO reviewed and analyzed management reports and historical test data; discussed key aspects of F-35 development with program management and contractor officials; and compared acquisition plans to DOD policy and GAO acquisition best practices. Cascading F-35 testing delays could cost the Department of Defense (DOD) over a billion dollars more than currently budgeted to complete development of the F-35 baseline program. Because of problems with the mission systems software, known as Block 3F, program officials optimistically estimate that the program will need an additional 5 months to complete developmental testing. According to best practices, credible estimates are rooted in historical data. The program's projections are based on anticipated test point achievements and not historical data. GAO's analysis—based on historical F-35 flight test data—indicates that developmental testing could take an additional 12 months (see table below). These delays could affect the start of the F-35's initial operational test and evaluation, postpone the Navy's initial operational capability, and delay the program's full rate production decision, currently planned for April 2019.
Program officials estimate that a delay of 5 months will contribute to a total increase of $532 million to complete development. The longer delay estimated by GAO will likely contribute to an increase of more than $1.7 billion, approximately $1.3 billion of which will be needed in fiscal year 2018. Meanwhile, program officials project the program will need over $1.2 billion in fiscal year 2018 to start two efforts. First, DOD expects it will need over $600 million for follow-on modernization (known as Block 4). F-35 program officials plan to release a request for Block 4 development proposals nearly 1 year before GAO estimates that Block 3F—the last block of software for the F-35 baseline program—developmental testing will be completed. DOD policy and GAO best practices state that requirements should be approved and a sound business case formed before requesting development proposals from contractors. Until Block 3F testing is complete, DOD will not have the knowledge it needs to present a sound business case for Block 4. Second, the program may ask Congress for more than $650 million in fiscal year 2018 to procure economic order quantities—bulk quantities. However, as of January 2017 the details of this plan were unclear because DOD's 2018 budget was not final and negotiations with the contractors were ongoing. Under federal internal control standards, agencies should communicate with Congress; otherwise, Congress may not have the information it needs to make a fully informed budget decision for fiscal year 2018. Completing Block 3F development is essential for a sound business case and warrants funding priority over Block 4 and economic order quantities at this time. GAO recommends that DOD use historical data to reassess the cost of completing development of Block 3F, complete Block 3F testing before soliciting contractor proposals for Block 4 development, and identify for Congress the cost and benefits associated with procuring economic order quantities of parts.
DOD did not concur with the first two recommendations and partially concurred with the third while outlining actions to address it. GAO continues to believe its recommendations are valid, as discussed in the report.
The federal government makes loans to students through private- and public-sector lenders in the FFELP or directly to students through FDLP. These two programs are among the largest of the federal government’s credit programs. At the end of 2004, there were about $245 billion in outstanding FFELP loans, about 20 percent of total federal guaranteed loans outstanding, and $107 billion in outstanding FDLP loans, about 43 percent of total federal direct loans outstanding. Students and parents are able to borrow the same types of loans through FFELP and FDLP, which include:

Subsidized and Unsubsidized Stafford Loans—variable rate loans available to students. The federal government pays the interest on behalf of subsidized loan borrowers while the student is in school and during a brief grace period when the student first leaves school.

PLUS Loans—variable rate loans made to parents, on behalf of students. The borrower pays all interest costs.

Consolidation Loans—borrowers may combine multiple federal student loans into a single loan. The interest rate is fixed based on the weighted average of the interest rates in effect on the loans being consolidated.

Under either loan program borrowers are able to repay loans earlier than required, with no penalty. The programs have several repayment options available to borrowers. For Stafford and PLUS loans, the standard repayment in both loan programs is a fixed amount per month for up to 10 years. Borrowers have other repayment options that allow them to extend repayment for up to 30 years, gradually increase the monthly payment, or base monthly payments on their adjusted gross income. The criteria for some of the alternative repayment options are different in FFELP and FDLP. For consolidation loans, the repayment terms depend on the loan amount.
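The fixed consolidation rate just described is a balance-weighted average of the rates on the loans being consolidated. A minimal sketch follows; the round-up to the nearest one-eighth of a percent is an assumed program rule added for illustration, not something stated in this report:

```python
import math

def consolidation_rate(loans):
    """Fixed rate for a consolidation loan: the balance-weighted average
    of the interest rates in effect on the loans being consolidated.

    `loans` is a list of (balance, annual_rate_percent) pairs. The
    round-up to the nearest 1/8 of a percent is an assumption about the
    program rules, not stated in this report.
    """
    total_balance = sum(balance for balance, _ in loans)
    weighted_rate = sum(balance * rate for balance, rate in loans) / total_balance
    # Round up to the nearest 0.125 percentage points (assumed rule).
    return math.ceil(weighted_rate / 0.125) * 0.125

# A borrower consolidating a $10,000 loan at 5.0 percent with a
# $5,000 loan at 6.5 percent locks in a fixed rate of 5.5 percent.
```

Because the rate is fixed at consolidation, a borrower who consolidates when rates are low keeps that low rate even as variable rates on new loans rise, which matters for the cost discussion later in this report.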
Moreover, borrowers who graduate, leave school, or drop below half-time enrollment are given a 6-month grace period before they must begin to repay their Stafford or consolidation loans. All borrowers may postpone repayment through deferment or forbearance if they meet certain criteria and the loan is not in default. Deferment is allowed for borrowers who remain enrolled at least half-time in a postsecondary school or graduate program, or who have experienced economic hardship. For borrowers who are temporarily unable to meet repayment obligations but are not eligible for deferment, lenders may grant a temporary and limited time period in which these borrowers do not need to repay their student loans, called forbearance. The FCRA guidance issued by OMB and accounting standards provide the framework for the process Education uses to calculate subsidy costs for student loans. Subsidy costs are calculated by estimating the federal government’s future cash flows for loans made or guaranteed in a particular fiscal year, called a loan cohort. In estimating cash flows for a loan cohort, Education must make assumptions about loan characteristics and future borrower behavior, such as: the type and dollar amount of loans obligated or guaranteed, and how many borrowers will pay early, pay late, or default on their loans and at what point in time. Moreover, the model used to estimate future cash flows includes assumptions about future interest rates. OMB provides Education with interest rate assumptions that are used for the discount rate, borrower interest rate, and lender yields. Education aggregates cash flows by loan cohort, loan type, and risk category, which reflects the differences in the likelihood of default.
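The subsidy cost calculation just described amounts to discounting the government's projected cash flows back to the year of disbursement. A minimal sketch with hypothetical flows; Education's actual model is far more detailed:

```python
def subsidy_cost(net_outflows, discount_rate):
    """Subsidy cost of a loan cohort: the net present value of the
    government's estimated future cash flows, discounted to the year of
    disbursement (the FCRA approach described in the text).

    `net_outflows` maps years from disbursement to the government's net
    cash outflow for that year (disbursements, SAP, and default payments
    minus fees and borrower repayments). A positive result is a net cost
    to the government; a negative result is a net gain.
    """
    return sum(flow / (1 + discount_rate) ** year
               for year, flow in net_outflows.items())

# Hypothetical direct loan: $100 disbursed now, with $60 and $50 of
# repayments expected over the next two years, discounted at 5 percent.
cost = subsidy_cost({0: 100.0, 1: -60.0, 2: -50.0}, 0.05)
# A small projected net gain for the government (about -$2.49).
```

The same discounting applies to both programs; what differs, as the report goes on to explain, is which cash flows occur and when.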
Education has five risk categories, which include, in order of higher to lower risk of default: (1) students at proprietary schools, (2) students at 2-year colleges, (3) freshmen and sophomores at 4-year colleges, (4) juniors and seniors at 4-year colleges, and (5) students at graduate schools. Although the method for calculating the subsidy cost is the same for both FFELP and FDLP, the federal government’s role in each loan program differs significantly, which, in turn, affects the type and timing of cash flows in each program. In FFELP, private lenders, such as banks, fund the loans, and the federal government guarantees lenders a statutorily specified minimum yield that is tied to, and varies with, market financial instruments. When the interest rate paid by borrowers is below that yield, the federal government gives lenders subsidy payments, called SAP. Moreover, the federal government, through state-designated guaranty agencies, guarantees repayment of loans if borrowers default. Guaranty agencies provide insurance to lenders for 98 percent of the unpaid principal of defaulted loans. The federal government, in turn, pays guaranty agencies 95 percent of their default claims. Guaranty agencies also perform various administrative functions in the FFELP. As shown in figure 1, under FFELP cash inflows to the federal government include fees and other payments from lenders and outflows from the federal government include SAP and default payments. FFELP cash flows are spread out over the life of the loan. Under FDLP, the U.S. Treasury funds the loans, which are originated through participating schools and contractors. Education’s Office of Federal Student Aid is responsible for delivering funds to schools participating in FDLP, monitoring its contracts, and providing technical assistance to schools.
Education contracts with private-sector companies to perform various administrative activities in FDLP, such as originating and servicing loans, and collecting defaulted loans. As shown in figure 2, FDLP cash inflows to the federal government are repayments of principal and interest payments and outflows include loan disbursements to borrowers. Because the federal government funds the loans, cash outflows occur in the early years as loan disbursements are made. Cash inflows, in the form of principal repayment and interest payments, occur in later years as borrowers enter repayment. Principal repayments may be less than disbursements, reflecting defaults, loan discharges, and loan forgiveness. Annually, agencies are generally required to update or “reestimate” loan costs for differences in estimated loan performance, such as differences between assumed and actual default rates, the actual program costs recorded in the accounting records, and new forecasts of future economic conditions, such as interest rates. Reestimates include all aspects of the original cost estimate, including prepayments, defaults, delinquencies, recoveries, and interest. Reestimates of the credit subsidy allow agency management to compare the original budget estimates with actual program results to identify variances from the original estimate, assess the quality of the original estimate, and adjust future program estimates as appropriate. Both FFELP and FDLP reestimated subsidy costs have differed from original estimates for loans made in fiscal years 1994 through 2004, highlighting the challenges in estimating the costs of federal student loans. FFELP reestimated subsidy costs were similar to or lower than original estimates for loans made in fiscal years 1994 to 2002, but higher than originally estimated for loans made in fiscal years 2003 and 2004. 
In comparison, FDLP reestimated subsidy costs were generally similar to or higher than original estimates for loans made in fiscal years 1994 through 2004. Across all types of loans, FDLP subsidy costs per $100 of loans disbursed were, for almost all loan cohorts, lower than those of FFELP. Reestimated subsidy costs for FFELP loans disbursed between fiscal years 1994 and 2002 were, in general, close to or lower than original estimates, while reestimated subsidy costs for loans disbursed in 2003 and 2004 were higher than originally expected, as shown in figure 3. From fiscal years 1994 to 1999, reestimated subsidy costs for FFELP were typically close to original estimates, while loans disbursed from fiscal year 2000 to fiscal year 2002 had reestimated subsidy costs that were lower than original estimates, ranging from $1.5 to $2.2 billion lower. Reestimated subsidy costs for loans disbursed in fiscal years 2003 and 2004 were $2.7 and $3.6 billion higher than original estimates. Differences between reestimated and original subsidy cost estimates for the 2003 and 2004 loan cohorts were in part due to significant differences between expected and actual loan volume. For example, Education originally estimated that about $40 billion in FFELP loans would be disbursed in 2003, but $69 billion was actually disbursed that year. The large difference was primarily due to a significantly higher volume of FFELP consolidation loans than originally estimated and the relatively high subsidy costs per $100 of these loans compared to consolidation loans made in previous years. After controlling for loan volume, FFELP reestimated subsidy costs per $100 disbursed were generally close to or lower than original subsidy cost estimates across loan types. As shown in table 1, for FFELP Stafford unsubsidized and PLUS loans, reestimated subsidy costs per $100 disbursed were lower than originally estimated for all loan cohorts except fiscal year 1999.
For subsidized Stafford loans, about two-thirds of the loan cohorts had lower reestimated subsidy costs per $100 disbursed. Slightly over half of all consolidation loan cohorts had lower reestimated subsidy costs per $100 disbursed than originally estimated. Reestimated subsidy costs for FDLP loans were in general similar to or higher than original estimates for loans disbursed between fiscal years 1994 and 2004. For FDLP loans disbursed between fiscal years 1994 and 1999, total reestimated subsidy costs were in general close to original estimates, but there was one loan cohort that had higher reestimated subsidy costs and another with much lower reestimated subsidy costs than originally expected, as shown in figure 4. In comparison, reestimated subsidy costs for FDLP loans disbursed between fiscal years 2000 and 2004 were higher than original estimates. In some cases original estimates projected a net gain for the government, but subsequent reestimates project a smaller gain or even a net cost for the government. For example, original subsidy cost estimates of the fiscal year 2000 loan cohort projected a net gain of $930 million for the government and reestimated subsidy costs project a net cost of $1.1 billion. Such swings in estimated subsidy costs illustrate that originally anticipated federal revenues may not, in fact, ultimately materialize. Differences between total reestimated and original subsidy cost estimates were not driven by differences between original and actual loan volume, but rather by changes in the subsidy rates—that is, subsidy costs per $100 disbursed. FDLP reestimated subsidy costs per $100 disbursed were usually close to or higher than original subsidy cost estimates across loan types. For example, as shown in table 2, reestimated subsidy costs per $100 disbursed for FDLP Stafford unsubsidized, and PLUS loans were, for almost all loan cohorts, higher than original estimates. 
For Stafford subsidized and consolidation loans, slightly over half of the loan cohorts had reestimated subsidy costs that were higher than originally estimated. For most Stafford unsubsidized and PLUS loan cohorts, and slightly over half of consolidation loan cohorts, reestimated subsidy costs per $100 disbursed were higher than the original estimate, but still project a net gain for the federal government. For example, Stafford unsubsidized loans disbursed in fiscal year 1998 were originally estimated to have a net gain of $6.93 for every $100 in loans disbursed. Reestimated subsidy costs show that the projected net gain for these same loans is estimated to be $5.13 per $100 disbursed. Some loan cohorts that originally projected a net gain for the federal government have reestimated subsidy costs with a net cost to the government. For example, PLUS loans disbursed in fiscal year 2000 that were originally projected to have a net gain of $13.41 per $100 disbursed were subsequently reestimated to have a net cost of $2.21 per $100 disbursed. For all loans disbursed between fiscal years 1994 and 2004, FDLP reestimated subsidy costs were lower than FFELP reestimated subsidy costs in aggregate and after controlling for loan volume. Reestimated total subsidy costs for FDLP loans were $2.5 billion compared to $36.6 billion for FFELP loans, as shown in table 3 below. After controlling for loan volume and comparing reestimated subsidy costs across the four types of loans—Stafford subsidized and unsubsidized, PLUS, and consolidation—FDLP reestimated subsidy costs per $100 disbursed were in general lower than FFELP reestimated subsidy costs per $100 disbursed. (See app. I for comparisons of reestimated subsidy costs of FDLP and FFELP loans, by loan type.) The difference between the reestimated subsidy cost for FDLP and FFELP varied significantly and depended on the type of loan and the year that the loan was disbursed. 
For example, reestimated subsidy costs per $100 disbursed for FDLP subsidized Stafford loans disbursed in fiscal year 2003 were $11.66 lower than for FFELP subsidized Stafford loans, while the difference for the same loans disbursed in 2000 was $1.35 per $100 disbursed. The primary reason for the difference in subsidy cost estimates between FFELP and FDLP was differences in the structure of the programs rather than in the characteristics of the borrowers. According to Education officials, estimates of long-term costs associated with subsidizing borrowers’ interest; canceling repayment of loans due to death, disability, and bankruptcy; and defaulted loans are roughly equivalent in both programs. However, under FFELP there are larger cash outflows in the form of SAP to lenders than cash inflows of lender fees, while in FDLP there are large cash inflow projections, net of interest payments to Treasury, in the form of borrower interest payments and no SAP or guaranty fees. Differences between original and reestimated subsidy cost estimates per $100 disbursed can be explained, in part, by lower than expected market interest rates, greater than anticipated loan consolidation, and the incorporation of more student loan data into the cash flow model. Differences between actual and expected interest rates and rates of consolidation affected reestimated subsidy costs for each loan program in a different way. For example, lower than expected interest rates over the last several years have resulted in lower reestimated subsidy cost estimates for FFELP and higher reestimated subsidy costs for FDLP. Larger than expected volumes of consolidation loans, which stemmed in part from low interest rates, contributed to lower FFELP reestimated subsidy costs for the underlying loan cohorts and higher FDLP reestimated subsidy costs for the underlying loan cohorts.
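The per-$100 comparisons used above normalize subsidy costs by loan volume so that cohorts of very different sizes can be compared. A minimal sketch with hypothetical figures:

```python
def subsidy_rate_per_100(subsidy_cost, volume_disbursed):
    """Subsidy cost per $100 of loans disbursed: the normalization used
    in the text to compare cohorts after controlling for loan volume.
    Negative values indicate a projected net gain for the government.
    All figures in the example below are hypothetical.
    """
    return 100.0 * subsidy_cost / volume_disbursed

# A cohort with $500 of estimated subsidy cost on $10,000 disbursed has
# a subsidy rate of $5.00 per $100 disbursed; a -$250 estimate on the
# same volume would be a projected net gain of $2.50 per $100.
```

This normalization is why, in the text, a cohort can have higher total reestimated costs simply because more loans were disbursed, even when the subsidy rate itself did not change.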
Furthermore, the availability of additional data for both FFELP and FDLP loans has enabled Education to refine its cash flow model, which has also contributed to differences between reestimated and original subsidy costs. Interest rates fell to lower than expected levels in 2001 and persisted at those levels through 2004, which affected subsidy cost estimates in both FFELP and FDLP because estimates, especially for the FDLP, are highly sensitive to changes between projected and actual interest rates. Cost estimates for the loan programs are sensitive to such changes because borrower interest rates in both FFELP and FDLP, and the lender yield in the FFELP, are variable rates. As a result, differences between projected and actual interest rates can have a significant impact on estimates of cash flows in both loan programs. OMB’s interest rate projections made prior to 2001, as well as those by other government agencies and the private sector, were considerably higher than actual interest rates for 2001 and beyond. For example, as shown in table 4, actual interest rates from 2001 to 2003 were substantially lower than OMB’s forecasts of interest rates used in the budget for fiscal year 1999 and fluctuated slightly from year to year. To the degree that such fluctuations were unanticipated, they contributed to volatility in subsidy cost reestimates from year to year. For FFELP, lower than expected interest rates have resulted in lower than expected SAP to lenders, which, in turn, resulted in lower reestimated subsidy cost estimates. As interest rates decreased, the difference, or spread, between the 3-month commercial paper (CP) rate and the 91-day Treasury bill rate narrowed. For example, as can be seen in figure 5, the average rates on the 91-day T-bill and the 3-month CP were 5.82 percent and 6.33 percent, respectively, in 2000, a difference of 0.51 percentage points. However, in 2004 the difference between the two rates was 0.15 percentage points.
The spread between commercial paper and Treasury bill rates serves as the primary basis for SAP payments to the lenders, and, as the spread narrowed, Education paid lower SAP, thus lowering reestimated subsidy costs. The climate of declining interest rates not only narrowed the spread between the T-bill rate and the CP rate and reduced SAP payments; it also eliminated SAP payments for some loans because interest rates paid by borrowers were higher than the guaranteed lender yield. Whether SAP is paid on a loan can change during a year because borrower interest rates are adjusted annually based on the final auction of T-bills before June 1 of each year while lender yields are adjusted each quarter. Thus in a climate of declining interest rates, SAP on certain loans was eliminated because the 3-month CP rate—on which the lender yield is based—fell, for a particular quarter, below the annually adjusted borrower rate. SAP was zero in 50 percent of the quarters for Stafford loans issued from January 1, 2000, through July 1, 2005. This is illustrated in figure 6, where one can also see that the more recent climate of rising interest rates could lead to increased SAP. In contrast, lower than expected interest rates contributed to higher reestimated FDLP subsidy costs. Under FDLP, the government had originally anticipated larger interest payments from borrowers as they repaid their loans because original subsidy cost estimates were based on forecasts that did not anticipate the significant decline in interest rates. Lower than expected interest rates thus resulted in lower than expected cash inflows to the government and higher FDLP subsidy cost reestimates. For example, using the numbers in table 4, one can see that original subsidy cost estimates made for the 1999 loan cohort assumed that interest rates on the 91-day Treasury bill would be 4 times higher than they actually were when some students would be entering repayment on loans they obtained in 1999.
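The SAP mechanism described above, in which the lender is guaranteed a minimum yield tied to the 3-month CP rate and the government pays the shortfall whenever the borrower's annually set rate falls below that yield, can be sketched as follows. The add-on value here is a hypothetical placeholder, not the statutory figure:

```python
def quarterly_sap_rate(cp_rate, borrower_rate, lender_addon=2.34):
    """SAP rate (in percentage points) for one quarter.

    The lender's guaranteed yield is the 3-month commercial paper rate
    plus a statutory add-on (2.34 here is a hypothetical placeholder).
    The government pays SAP only when the borrower's annually adjusted
    rate falls short of that yield; otherwise SAP is zero, as the text
    notes happened in half the quarters for Stafford loans issued from
    2000 to mid-2005.
    """
    guaranteed_yield = cp_rate + lender_addon
    return max(0.0, guaranteed_yield - borrower_rate)

# With a 2000-era CP rate near 6.33 percent and a borrower rate of
# 8.0 percent, SAP is paid; with a CP rate of 1.0 percent and the same
# borrower rate, the guaranteed yield is below the borrower rate and
# SAP falls to zero.
```

Because the borrower rate resets annually while the lender yield resets quarterly, the function would be evaluated each quarter with a fresh `cp_rate` but the same `borrower_rate` for the year, which is exactly how declining rates zeroed out SAP mid-year.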
Moreover, original estimates were based on the assumption that the interest rate paid by borrowers on those loans would be higher than the interest rate Education pays to Treasury for borrowing the funds to make the loans. As can be seen in figure 7, the borrower interest rate fell below the discount rate (rate paid to Treasury) in 2001. Again, such a climate of lower than anticipated interest rates led to higher reestimates of subsidy costs. As interest rates rise, the interest paid by borrowers will increase–possibly to rates higher than the discount rate. Lower than expected interest rates also affected the actual rate used to discount cash flows for FFELP and FDLP subsidy cost estimates. When subsidy cost estimates are first prepared for the budget, agencies use an estimated discount rate. Education sets the actual discount rate when a loan cohort is fully disbursed. Because subsidy cost estimates are prepared prior to when a loan is disbursed, it is expected that differences between the estimated and actual discount rate will contribute to differences between reestimated and original subsidy cost estimates. For example, the actual discount rate for loans disbursed in fiscal year 2002 was lower than originally estimated, which lowered reestimated subsidy costs slightly in both FFELP and FDLP. Higher than expected consolidation volume, which stemmed in part from low interest rates, also affected reestimated subsidy costs. As we have previously reported, the number of borrowers consolidating their loans has increased substantially over the last several years. Consolidation activity has been higher than expected in both loan programs since fiscal year 1999. When borrowers consolidated their student loans and locked in recent low interest rates, they effectively paid off the underlying loans— Stafford subsidized and unsubsidized and PLUS—ahead of schedule and started a new consolidation loan. 
With the new consolidation loans, borrowers began new repayment periods that could be up to 30 years from when the consolidation loans were made. Because Education calculates subsidy costs for consolidation loans separately, it must adjust original estimates of the underlying loans to reflect unanticipated prepayments. Education considers the consolidation a new loan in the year that the loan was disbursed. Figures 8 and 9 provide a simplified example of consolidation from both the borrower’s and Education’s perspective. Consolidation activity has been particularly high for FFELP loans, increasing from about $7 billion in fiscal year 2000 to $37 billion in fiscal year 2004. Education had not anticipated such an increase in consolidation loans, which contributed to lower reestimated subsidy costs for the underlying loan cohorts. Under FFELP, consolidation loans shortened the length of time Education anticipated paying SAP to lenders and eliminated default risk on the underlying loans, thus lowering reestimated subsidy costs. Estimated subsidy costs for recent consolidation cohorts, which reflect costs associated with default risk and SAP to lenders, are quite large in comparison to previous consolidation loan cohorts. For example, reestimated subsidy costs per $100 disbursed for consolidation loans made in 2003 were $11.21 and in 2004 were $15.98 compared to $3.11 for consolidation loans made in 2002. The increase occurred in part because borrowers locked in lower fixed interest rates on their consolidation loans and the minimum yield guaranteed to lenders is projected to be much higher than the fixed interest rate paid by borrowers, thus requiring the government to pay higher SAP than it would have on the 2002 loans. Consolidation activity in FDLP also increased—from $5 billion in fiscal year 2000 to $8 billion in fiscal year 2004.
As borrowers consolidated their loans, they repaid the underlying loans, which shortened the length of time Education had expected to receive interest payments on these loans. According to Education, it had calculated that the interest payments from borrowers would contribute positively to Education’s cash flows because expected interest rates that borrowers paid to Education were higher than the rate Education paid to borrow the funds. However, greater than expected prepayment due to consolidation decreased the anticipated interest payments on the underlying loans, which in turn contributed to higher reestimated subsidy cost estimates of the underlying loan cohorts. Moreover, as we reported in August 2004, large amounts of FDLP loans—about $7.5 billion between 1998 and 2002—were consolidated into FFELP. As a result, Education will not receive any of the future projected interest payments on those loans that are now FFELP loans, which also contributed to higher reestimated FDLP subsidy costs. Additionally, for the FDLP loans consolidated into FFELP, the government may need to pay SAP that it otherwise would not have had to pay. More data for both FFELP and FDLP loans have allowed Education to refine its cash flow model, partly as a result of changes Education made to address recommendations from our prior reports and from Education’s auditors. The addition of data about borrower behavior to the cash flow model has also contributed to the differences between reestimated and original subsidy costs. For example, Education officials reported that in recent years, data on FFELP and FDLP borrowers’ use of deferment options, which allow them to delay making payments on a loan when they return to school or are experiencing economic hardship, has become available.
With this data Education is able to explicitly include in its model the number of students using deferment options and project the effect on cash flows in both FFELP and FDLP, rather than implicitly including deferments in its model through adjustments in the length of time a loan was expected to be in repayment. According to Education officials, more FFELP borrowers than they had predicted have used deferment options and, when this data was incorporated into FFELP’s cash flow model, it contributed to an increase in reestimated FFELP subsidy costs of $5 billion in fiscal year 2003. Education reported that deferment data will be added to the FDLP cash flow model and will be reflected in reestimated subsidy costs in the fiscal year 2007 Budget of the United States Government. Education also noted that more data has become available in FDLP because the program has been in existence for 10 years and in FFELP because of improvements made by guaranty agencies. Previously, Education had based its FDLP cash flow assumptions on FFELP data, but Education now has data on when borrowers default or enter repayment based on FDLP borrowers. According to Education, actual defaults in FDLP have not been much different from the assumptions made using FFELP data because defaults are best predicted by the borrower and the type of school attended rather than from which loan program the student borrowed. According to Education officials, guaranty agencies—that are responsible for reporting on the status of a loan, i.e., in repayment, deferred, defaulted, or in-school—have made changes in their data systems and the quality checks on the data. As a result, Education has been better able to estimate default rates, subsequent collections, and their effect on cash flows in FFELP. 
In particular, Education noted that there have been improvements in the data Education uses to estimate collections on defaulted loans in both FFELP and FDLP, which showed higher than originally estimated collections and contributed to lower reestimated subsidy costs. Additional federal costs and revenues associated with the student loan programs, such as federal administrative expenses, some costs of risk associated with lending money over time, and federal tax revenues generated by both student loan programs are not included in subsidy cost estimates. These are important factors to consider when determining costs of the student loan programs; however, they are difficult to measure. Under current law, federal administrative expenses are excluded from subsidy cost estimates. In addition, subsidy cost estimates do not explicitly include all risk that the government incurs by lending money over time. Moreover, both loan programs generate federal tax revenues that are not included in subsidy cost calculations. Under FCRA, federal administrative expenses are excluded from subsidy cost estimates. Federal administrative expenses for the student loan programs have been accounted for in Education’s budget on a cash basis—showing how much money is allocated for administering all federal student aid programs in one fiscal year. The federal government is primarily responsible for administering the FDLP and, for the most part, Education has contracted with private-sector companies to perform administrative tasks, such as originating and servicing loans. In the FFELP, lenders and guaranty agencies perform administrative functions. In addition to the SAP paid to lenders to guarantee a minimum yield, which includes coverage of the administrative expenses incurred, Education pays guaranty agencies account maintenance fees for their administrative costs.
In fiscal year 2006, Education requested $939 million for administrative expenses for all federal student loan and grant aid programs. Of this amount, $238 million was for FFELP administrative expenses and $388 million was for FDLP administrative expenses. When FCRA was first passed, there were concerns about whether agencies could change existing accounting systems to estimate long-term administrative expenses for a loan program. Over the last few years, Education’s Office of Federal Student Aid has been developing a system that allocates its administrative expenses to each student aid program in a particular fiscal year so that management would have information that could be used for decision-making purposes. While developing the system, Education officials reported that some administrative expenses are clearly linked to either FFELP or FDLP—such as payments to originate or service FDLP loans, and servicing defaulted FFELP loans. However, other administrative expenses are incurred by both loan programs, such as information systems used to process financial aid applications, thus requiring Education to develop a systematic way to allocate such expenses to FFELP or FDLP. In the fiscal year 2006 budget, Education included, as supplementary information, modified cost estimates that included estimated administrative expenses. As shown in table 5, if administrative expenses are included, subsidy cost estimates for loans disbursed in fiscal year 2006 would increase by $1.45 per $100 disbursed in FDLP and by $0.69 per $100 disbursed in FFELP. To produce cost estimates that included administrative expenses, Education not only needed to know how much of an expense was allocated to FDLP or FFELP, but also had to project how such costs might change in the future and whether an expense was paid now or later.
For example, servicing costs for an FDLP loan while the borrower is in school are paid in the first years after a loan is disbursed and are lower than the costs of servicing a loan in repayment, which are typically paid several years later. According to Education, determining the timing of the expense was important because expenses in later years were discounted and, therefore, cost less in present value terms than those made in the first year. Moreover, Education officials acknowledged that there are limitations with these estimates because they assumed that administration of student aid programs would remain the same in the future. They reported that there is the possibility that administration processes and functions will change based on legislative or technological changes, but it was not possible to develop assumptions that could be used in estimating the effects of any such changes. While current subsidy cost estimates account for some risks—uncertainties regarding future cash flows—they do not include all risks incurred when lending money over time. Among the risks borne by any lender are credit risk—the possibility that the loan will not be fully repaid—and interest rate risk—unanticipated fluctuations in the interest rate due to changes in the economy that cause changes in the present value of the loans’ cash flows. Some studies have commented that by not incorporating all risks in subsidy cost estimates, the government does not present an accurate picture of the costs of its credit programs, including both FFELP and FDLP. Risk can be reflected in subsidy cost estimates in different ways. For example, one way is to incorporate it in estimates of cash flows, and another way is to adjust the discount rate to reflect the risk. Currently, Education incorporates some risks into its FFELP and FDLP subsidy cost estimate model by explicitly adjusting cash flow estimates. For example, credit risk is explicitly incorporated into Education’s subsidy cost model.
Cash flow estimates are adjusted to reflect the likelihood that borrowers will default on their loans based primarily on the type of school a borrower attends (e.g., 2-year college, graduate school, etc.). Interest rate risk, however, is not explicitly incorporated into Education’s model. Interest rate fluctuations can affect estimates of SAP and borrower interest payments as well as borrower behavior with respect to loan prepayment and consolidation. Although Education uses estimated prepayment rates in adjusting estimated FFELP and FDLP cash flows, these estimates are based on historical averages rather than an econometric forecast of how interest rates might fluctuate in the future and, thereby, influence borrowers’ decisions to prepay or consolidate their loans. Relying on historical averages—especially if such averages do not reflect a variety of interest rate environments and stable loan terms and borrower characteristics—may not reflect the tendency for prepayments to increase or decrease at times when it is advantageous for borrowers. CBO and others have suggested that, rather than adjusting cash flows, the discount rate could be changed to incorporate certain types of risk, such as interest rate risk, in estimating subsidy costs of federal credit programs. Currently, subsidy cost estimates calculate the net present value of the loans using the “risk-free” discount rate determined by OMB in accordance with FCRA, which reflects the government’s cost of borrowing funds. The rate is known as risk-free because an investor buying a U.S. Treasury instrument knows with certainty what cash flows will be received and when they will be received and there is assumed to be no probability of default on the investment. This risk-free discount rate tends to be relatively low compared to interest rates used to discount cash flows in private industry, where interest rates reflect the market’s valuation of transactions and incorporate considerations of various types of risk. 
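The net present value mechanics described above can be sketched as follows. All figures are hypothetical and greatly simplified relative to Education's actual cash flow model; the point is only that a FCRA-style subsidy cost for a direct loan is the disbursement minus the discounted value of expected inflows, using the government's borrowing rate as the "risk-free" discount rate.

```python
# Minimal FCRA-style subsidy cost sketch with invented numbers.
# Subsidy cost = disbursement - PV of expected inflows (principal,
# interest, and default collections), discounted at the Treasury rate.

def npv(cash_flows, rate):
    """Net present value of (year, amount) cash flows at a given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in cash_flows)

treasury_rate = 0.04   # assumed risk-free discount rate under FCRA
disbursed = 100.0      # per $100 of loans disbursed

# Hypothetical expected inflows: ten annual payments of $12, reduced in
# years 5-10 to reflect assumed defaults net of subsequent collections.
inflows = [(t, 12.0) for t in range(1, 5)] + [(t, 10.5) for t in range(5, 11)]

pv_inflows = npv(inflows, treasury_rate)
subsidy_cost = disbursed - pv_inflows  # positive = net cost, negative = net gain

print(f"PV of expected inflows per $100 disbursed: {pv_inflows:.2f}")
print(f"Subsidy cost per $100 disbursed: {subsidy_cost:+.2f}")
```

Reestimates work by replacing assumed inflows with actual experience and re-running the same calculation, which is why cost estimates shift as interest rates, defaults, and prepayments diverge from the original assumptions.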
In a 2004 report, CBO proposed, among other methods, using a risk-adjusted discount rate, rather than the risk-free rate, to estimate subsidy costs of federal credit programs. In the case of federal student loans, one way to calculate a risk-adjusted discount rate would be to evaluate the secondary market for student loans, where student loans are often sold to banks or other investors. However, there are limitations to this approach given numerous differences in private-sector versus public sector assessments of risk. Notwithstanding this, the market price of the student loans would reflect the market’s valuation of the loans, because the expected cash flows would have been discounted using a higher discount rate that incorporates risks—such as interest rate risk—that are not included in Education’s subsidy cost model. The present value (price) of loans being sold on the secondary market would tend to be lower than the government’s valuation of similar loans, i.e., loans with similar default risk, loan amount, time to repayment, and other factors. This difference in loan valuation could be helpful in determining a risk-adjusted discount rate to use in calculating the cost to the government, although determining an appropriate rate would be challenging. Incorporating interest rate risk would affect subsidy cost estimates for both credit programs, FFELP and FDLP. Modeling interest rate risk more systematically through the cash flow estimates would affect prepayment and interest payment projections under FDLP, as well as SAP projections and prepayment activities under FFELP. The extent to which subsidy cost estimates would change for FFELP and FDLP would depend on the interest rate scenarios forecasted and the subsequent effect on cash flows in each program. However, using a risk-adjusted discount rate would have a greater impact on the subsidy cost estimates of FDLP relative to FFELP. 
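A rough numerical sketch, using invented cash flow profiles rather than actual program data, illustrates why a higher, risk-adjusted discount rate would move FDLP estimates more than FFELP estimates: a profile with a large early outlay and late inflows is far more sensitive to the discount rate than one with smaller, more evenly spread net flows.

```python
# Hedged illustration with hypothetical cash flow profiles (not program
# data): raising the discount rate to price in interest rate risk shifts
# the NPV of an FDLP-like profile (big outlay at t=0, inflows late) much
# more than an FFELP-like profile (small early inflows, outflows later).

def npv(cash_flows, rate):
    """Net present value of (year, amount) cash flows at a given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in cash_flows)

# Per $100 disbursed; negative = government outflow, positive = inflow.
fdlp_like = [(0, -100.0)] + [(t, 13.0) for t in range(3, 13)]
ffelp_like = [(t, 1.0) for t in range(0, 3)] + [(t, -1.5) for t in range(3, 13)]

for name, flows in [("FDLP-like", fdlp_like), ("FFELP-like", ffelp_like)]:
    risk_free = npv(flows, 0.04)   # assumed Treasury rate
    risk_adj = npv(flows, 0.07)    # assumed risk-adjusted rate
    print(f"{name}: NPV at 4% = {risk_free:+.2f}, at 7% = {risk_adj:+.2f}, "
          f"shift = {risk_adj - risk_free:+.2f}")
```

With these invented profiles, the FDLP-like estimate swings by many dollars per $100 disbursed while the FFELP-like estimate moves by only a fraction of that, mirroring the directional point in the text.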
This difference would result, in part, because of differences in the amount and timing of cash flows: FDLP has large cash outlays early in a loan’s life and large cash inflows later, when loans are in repayment. Thus these late cash inflows would be discounted at a higher rate and would have a smaller present value than under the current discounting methodology. FFELP, on the other hand, generates some cash inflows to the government early while cash outflows occur later as loans default or when SAP payments, if any, are made. Both FFELP and FDLP generate federal tax revenues that are reflected in the revenue portion of the budget but are not included in subsidy cost calculations. Federal tax revenues are generated by a variety of sources, including private-sector lenders that account for a majority of the lenders that make or hold FFELP loans. Many of these lenders participate actively in the multi-billion dollar financial services industry of taxable and tax- exempt bonds, asset-backed securities, and other debt instruments and pay federal taxes on the income earned from these sources as well as from their student loan business. In addition, other private-sector companies that work with FFELP lenders and investors buying student loan bonds and securities also generate federal tax revenues from the income earned from their participation in FFELP. Moreover, to service and collect defaulted FFELP loans, Education contracts with private-sector companies that are another source of federal tax revenue. Although FDLP is financed and primarily administered by the federal government, Education contracts with private-sector companies for many key administrative tasks, such as servicing loans while borrowers are in school, repayment, or default. In fiscal year 2004 Education reported that it paid $321 million to private-sector contractors to service student loans and perform other administrative tasks in the FDLP. 
These private-sector contractors earn income from their participation in FDLP on which they may pay federal taxes. Another source of tax revenue is income tax paid by U.S. investors that hold Treasury securities used to finance FDLP loans. Estimating the dollar amount of federal tax revenues generated by private sector entities and investors in FFELP and FDLP would be challenging. For example, many lenders are large publicly traded financial services companies with student loans being one portion of their business, making it difficult to identify the tax revenue generated from their student loan business. Moreover, to make an estimate of tax revenues would require knowledge of each lender’s profits from its student loan business and applicable tax rates. Significant reestimates of subsidy costs over the past 10 years illustrate the challenges of estimating the lifetime costs of loans. As we have shown, subsidy cost estimates and reestimates are sensitive to the assumptions used in estimating these costs. The historically low interest rates that persisted over the last several years were below levels previously forecasted. Because cost estimates for FFELP and especially for FDLP loans are sensitive to changes between projected and actual interest rates, subsidy cost reestimates varied from original estimates. To the extent that current assumptions correctly predict future loan performance and interest rates, subsidy costs per $100 of FFELP loans made from fiscal years 1994 to 2004 will be, in many cases, less costly than originally anticipated. On the other hand, over the same time period, subsidy costs per $100 of FDLP loans will in many cases be higher than originally anticipated. FDLP subsidy costs per $100 of loans disbursed have, in general, remained lower than those of FFELP. 
Nonetheless, if current assumptions correctly predict future loan performance and economic conditions, the originally estimated gain to the government from FDLP loans made in fiscal years 1994 to 2004 will not materialize, and instead these loans will result in a net cost to the government. In reality, however, subsidy cost estimates of FFELP and FDLP loans made in fiscal years 1994 to 2004 will continue to change as future reestimates incorporate actual experience and new interest rate forecasts. Similarly, initial subsidy cost estimates for loans made in the future will also change over the life of these loans and at times be lower or higher than initially estimated, depending on the extent to which loan performance and interest rates differ from assumptions used to develop initial estimates. Actual subsidy costs for a cohort of student loans will remain unknown until all payments that will be made on such loans have been collected. Despite the fact that subsidy cost estimates will change from year to year, estimates developed in accordance with FCRA more fully and accurately present the expected long-term costs of federal student loans than did the prior method of calculating costs based on single-year cash flows to and from the government. As a result of FCRA, the budget is a more useful tool for allocating resources among the myriad of competing demands for federal dollars than it once was. Subsidy cost estimates, for example, provide policymakers the means to more accurately evaluate the long-term budgetary implications of potential legislative, regulatory, and administrative reforms. At the same time, it is important for policymakers to understand how credit reform subsidy cost estimates are developed and to recognize that such estimates will change in the future. Decisions made in the short-term on the basis of these estimates can have long-term repercussions for the fiscal condition of the nation. 
While subsidy cost estimates include many of the federal costs associated with FFELP and FDLP loans, they do not capture all federal costs and revenues associated with the loan programs. Consideration of all federal costs and revenues of the loan programs would be an important component of a broader assessment of the costs and benefits of the two programs. Because federal administrative expenses—in accordance with FCRA—are excluded from subsidy cost estimates, for example, these estimates can underestimate the total lifetime costs of FFELP and FDLP loans. Other costs and revenues are also not considered in subsidy costs estimates, including interest rate risk inherent to lending programs, and federal tax revenues generated by private-sector activity in both FFELP and FDLP. Calculations of total federal costs would be enhanced were these additional costs and revenues considered, though doing so may require complex methodologies and/or data that are not currently readily available. We provided Education with a copy of our draft report for review and comment. Education reviewed the report and had no comments. Education noted that because the report did not include recommendations for the Department, it was not providing a formal response to be included in the report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time we will send copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to the report are listed in appendix II.

Appendix I: Comparison of Fiscal Year 2006 FDLP and FFELP Reestimated Subsidy Costs per $100 Disbursed, by Loan Type and Cohort. [Table of subsidy costs per $100 disbursed, in nominal dollars, not reproduced here.]

Appendix II: The following individuals made important contributions to the report: Jeff Appel, Assistant Director; Andrea Sykes, Analyst-in-Charge; Nagla’a El-Hodiri; Jeffrey W. Weinstein; Christine Bonham; Marcia Carlsen; Austin Kelly; Mitch Rachlis; and Lauren Kennedy.

In fiscal year 2004, the federal government made or guaranteed about $84 billion in loans for postsecondary education through two loan programs--the Federal Family Education Loan Program (FFELP) and the Federal Direct Loan Program (FDLP). Under FFELP, private lenders fund the loans and the government guarantees them a minimum yield and repayment if borrowers default. When the interest rate paid by borrowers is lower than the guaranteed minimum yield, the government pays lenders special allowance payments (SAP). Under FDLP, the U.S. Treasury funds the loans that are originated through participating schools. Under the Federal Credit Reform Act (FCRA) of 1990 the government calculates, for purposes of the budget, the net cost of extending or guaranteeing credit over the life of a loan, called a subsidy cost. Agencies generally update, or reestimate, subsidy costs annually to include actual program results and adjust future program estimates. GAO examined (1) whether reestimated subsidy costs have differed from original estimates for FFELP and FDLP loans disbursed in fiscal years 1994 through 2004, (2) what factors explain changes between reestimated and original subsidy rates--that is subsidy cost estimates per $100 disbursed; and (3) which federal costs and revenues associated with the student loan programs are not included in subsidy cost estimates.
Both FFELP and FDLP subsidy cost reestimates have differed from original estimates for loans made in fiscal years 1994 through 2004, reflecting the challenges inherent in estimating the actual costs of loans made under each of these federal loan programs. Reestimated subsidy costs for FFELP loans were close to or lower than original estimates for loans made in fiscal years 1994 to 2002, but higher than originally estimated for loans made in fiscal years 2003 and 2004. FDLP reestimated subsidy costs were generally similar to or higher than originally estimated for loans made in fiscal years 1994 through 2004. Differences between original and reestimated subsidy cost estimates per $100 disbursed were, in part, due to market interest rates that were lower than originally forecasted, greater than anticipated loan consolidation, and the availability of additional data on student loans. Each of these factors has affected reestimated subsidy costs for each loan program in a different way. For example, interest rates fell to lower than expected levels in 2001 and the condition persisted through 2004. For FFELP, lower than expected interest rates have made the difference between the borrower interest rate and lender yield smaller than expected resulting in lower SAP paid to lenders, which in turn resulted in lower reestimated subsidy cost estimates. For FDLP, lower than expected interest rates contributed to higher reestimated subsidy costs because the government received smaller interest payments from borrowers than originally anticipated and, in some cases, the rate paid by student borrowers fell below the government's fixed borrowing rate. Certain federal costs and revenues associated with the student loan programs, such as federal administrative expenses, some costs of risk associated with lending money over time, and federal tax revenues generated by both student loan programs, are not included in subsidy cost estimates. 
For example, under current law, federal administrative expenses are excluded from subsidy cost estimates. Moreover, both loan programs generate federal tax revenues from private sector companies and investors that are encompassed in the revenue portion of the budget but are not included in subsidy cost calculations. Estimating the amount of federal tax revenues generated by the loan programs would be difficult and was beyond the scope of our review. Education reviewed a draft copy of this report and did not have any comments. |
Medicare covered approximately 54 million beneficiaries in fiscal year 2014 at an estimated cost of $603 billion. The program consists of four parts, Parts A through D. In general, Part A covers hospital and other inpatient stays, and Part B covers hospital outpatient and physician services, durable medical equipment, and other services. Together, Parts A and B are known as traditional Medicare or Medicare fee-for-service. Part C is Medicare Advantage, under which beneficiaries receive their Medicare health benefits through private health plans, and Part D is the Medicare outpatient prescription drug benefit, which is administered through private drug plans. Medicare beneficiaries who enroll in Part C or Part D plans receive separate cards from those plans, in addition to their traditional Medicare card. Generally, an individual’s eligibility to participate in Medicare is initially determined by the Social Security Administration, based on factors such as age, work history, contributions made to the programs through payroll deductions, and disability. Once the Social Security Administration determines that an individual is eligible, it provides information about the individual to CMS, which prints and issues a paper Medicare card to the beneficiary. Providers must apply to enroll in Medicare to become eligible to bill for services or supplies provided to Medicare beneficiaries. CMS has enrollment standards and screening procedures in place that are designed to ensure that only qualified providers can enroll in the program and to prevent enrollment by entities that might attempt to defraud Medicare. Under Medicare fee-for-service, providers bill Medicare by submitting claims for reimbursement for the services and supplies they provide to beneficiaries. Providers are not issued identification cards, but instead use an assigned unique provider identification number—their National Provider Identifier (NPI)—on each claim.
Electronically readable cards could be implemented for a number of different purposes in Medicare. We identified three key proposed uses: Authenticating beneficiary and provider presence at the point of care. Beneficiary and provider cards could be used for authentication to potentially help limit certain types of Medicare fraud, as CMS could use records of the cards being swiped to verify that they were present at the point of care. Using electronically readable cards for authentication would not necessarily involve both beneficiaries and providers, as cards could be used solely to authenticate beneficiaries, or solely to authenticate providers. Electronically exchanging beneficiary medical information. Beneficiary cards could be used to store and exchange medical information, such as electronic health records, beneficiary medical conditions, and emergency care information, such as allergies. Provider cards could also be used as a means to authenticate providers accessing electronic health record (EHR) systems that store and electronically exchange beneficiary health information. Electronically conveying beneficiary identity and insurance information to providers. Beneficiary cards could be used to auto- populate beneficiary information into provider IT systems and to automatically retrieve existing beneficiary records from provider IT systems. For example, an electronically readable Medicare beneficiary card could contain the identity and insurance information printed on the current paper Medicare cards—beneficiary name, Medicare number, gender, Medicare benefits, and effective date of Medicare coverage. The primary purpose of this potential use would be to improve provider record keeping by allowing providers the option to capture beneficiary information electronically. The use of electronically readable cards for health care has been limited thus far in the United States. 
According to stakeholders, the limited use is due, in part, to reluctance among the insurance industry and health care providers to invest in a technology that would depend on a significant investment from both parties to implement. However, some health insurers, including a large insurer, have issued electronically readable cards to their beneficiaries, and some integrated health systems have issued cards to patients to help manage patient clinical and administrative information. In other countries, smart cards have been used as health insurance cards for decades. For example, France and Germany have used smart cards in their health care systems since the 1990s. Appendix II includes additional details about France’s and Germany’s use of smart cards. Although there is no reliable measure of the extent of fraud in the Medicare program, for over two decades we have documented ways in which fraud contributes to Medicare’s fiscal problems. Preventing Medicare fraud and ensuring that payments for services and supplies are accurate can be complicated, especially since fraud can be difficult to detect because those involved are generally engaged in intentional deception. Common health care fraud schemes in Medicare include the following: Billing for services not rendered. This can include providers billing for services and supplies for beneficiaries who were never seen or rendered care, and billing for services not rendered to beneficiaries who are provided care (such as adding a service that was not provided to a claim for otherwise legitimately provided services). In some types of fraud schemes, individuals may steal a provider’s identity and submit claims for services never rendered and divert the reimbursements without the provider’s knowledge. Fraudulent or abusive billing practices.
This can include providers billing Medicare more than once for the same service; inappropriately billing Medicare and another payer for the same service; upcoding of services; unbundling of services; billing for noncovered services as covered services; billing for medically unnecessary services; and billing for services that were performed by an unqualified individual, or misrepresenting the credentials of the person who provided the services. Kickbacks. This can include providers, provider associates, or beneficiaries knowingly and willfully offering, paying, soliciting, or receiving anything of value to induce or reward referrals or payments for services or goods under Medicare. Among other processes, to detect potential fraud, CMS employs IT systems—including its Fraud Prevention System—that analyze claims submitted over a period of time to detect patterns of suspicious billing. CMS and its contractors investigate providers and beneficiaries with suspicious billing and utilization patterns and, in suspected cases of fraud, can take administrative actions, such as suspending payments or revoking a provider’s billing privileges, or refer the investigation to the HHS Office of Inspector General for further examination and possible criminal or civil prosecution. As we have previously reported, there are three potential factors that can be used to authenticate an individual’s identity: (1) “something they possess,” such as a card; (2) “something they know,” such as a password or personal identification number (PIN); and (3) “something they are,” such as biometric information, for example, a fingerprint, or a picture ID. Generally, the more factors that are used to authenticate an individual’s identity, the higher the level of identity assurance.
For example, a card used in conjunction with a PIN provides a higher level of identity authentication than just a card, since the PIN makes it more difficult for individuals who are not the cardholder to use a lost or stolen card. NIST has issued standards for federal agencies for using electronically readable cards to achieve a high level of authentication, and those standards require robust enrollment and card issuance processes to ensure that the cards are issued to the correct individuals. These processes include procedures to verify an individual’s identity prior to card issuance to ensure eligibility and to ensure that the cards are issued to the correct individual. For example, verifying an individual’s address is an important practice for issuing cards by mail. If a significant number of cards are issued to ineligible or incorrect individuals, it undermines the utility of the cards for identity authentication. Practices that provide higher levels of identity authentication generally are more expensive and difficult to implement and maintain and may cause greater inconvenience to users than practices that provide lower levels of assurance. The level of identity authentication that is appropriate for a given application or transaction depends on the risks associated with the application or transaction. The greater the determined risk, the greater the need for higher-level identity authentication practices. The Office of Management and Budget and NIST have issued guidance defining four levels of identity assurance ranging from level 1—“little or no confidence in the asserted identity’s validity”—to level 4—“very high confidence in the asserted identity’s validity”—and directed agencies to use risk-based methods to decide which level of authentication is appropriate for any given application or transaction. Additionally, authentication practices should take into account issues related to cost and user acceptability.
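The idea of matching authentication strength to transaction risk can be sketched with a toy rubric. The mapping below is a hypothetical simplification, not the actual OMB/NIST methodology: it simply counts the distinct factor types presented (something possessed, known, or inherent) and compares the achieved level against a required level derived from the assessed risk.

```python
# Toy sketch (invented rubric, not OMB/NIST guidance): more independent
# authentication factors yield a higher assurance level, and the level
# required should follow from the assessed risk of the transaction.

FACTORS = {"possession", "knowledge", "biometric"}

def assurance_level(factors_presented, transaction_risk):
    """Return (achieved_level, sufficient) under a simplified rubric."""
    distinct = len(FACTORS & set(factors_presented))
    achieved = min(4, 1 + distinct)  # 0 factors -> level 1 ... 3 factors -> level 4
    required = {"low": 1, "moderate": 2, "high": 3, "very high": 4}[transaction_risk]
    return achieved, achieved >= required

# A card alone vs. a card plus PIN, for a moderate-risk transaction:
print(assurance_level({"possession"}, "moderate"))                # card only
print(assurance_level({"possession", "knowledge"}, "moderate"))   # card + PIN
```

The point mirrors the text: a card plus a PIN reaches a higher level than a card alone, and whether either is "enough" depends on the risk of the application, not on the technology by itself.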
CMS currently relies on providers to authenticate the identities of Medicare beneficiaries to whom they are providing care, but the agency does not have a way to verify whether beneficiaries and providers were actually present at the point of care when processing claims. At this point, CMS has not made a determination that a higher level of beneficiary and provider authentication is needed. The type of electronically readable card most appropriate for Medicare would depend on how the cards would be used. Three common types of electronically readable cards that could be used to replace the current printed Medicare card are smart cards, magnetic stripe cards, and bar code cards. The key distinguishing feature of smart cards is that they contain a microprocessor chip that can both store and process data, much like a very basic computer. Based on our analysis of the capability of the three types of cards, we found that while all of the cards could be used for authentication, storing and exchanging medical information, and conveying beneficiary information, the ability of smart cards to process data enables them to provide higher levels of authentication and better secure information than cards with magnetic stripes and bar codes. Our analysis found that smart cards could provide substantially more rigorous authentication of the identities of Medicare beneficiaries and providers than magnetic stripe or bar code cards (see fig. 1). Although all three types of electronically readable cards could be used for authentication, smart cards provide a higher level of assurance in their authenticity because they are difficult to counterfeit or copy. 
Magnetic stripe and bar code cards, on the other hand, are easily counterfeited or copied. For example, officials in France told us that they chose to use smart cards as their health insurance cards, in part, because they were less susceptible to counterfeiting, and reported that they have not encountered any problems with counterfeit cards. Additionally, smart cards can be implemented with a public key infrastructure (PKI)—a system that uses encryption and decryption techniques to secure information and transactions—to authenticate the cards and ensure the data on the cards have not been altered. All three types of cards could be used in conjunction with other authentication factors, such as a PIN or biometric information, to achieve a higher level of authentication. However, only smart cards are capable of performing on-card verification of other authentication factors. For example, smart cards can verify whether a user provides a correct PIN or can confirm a fingerprint match, without being connected to a separate IT system. Cards with magnetic stripes and bar codes cannot perform such on-card verification, and require a connection to a separate IT system to verify PINs or biometric information. We also determined that using electronically readable cards to store and exchange medical information would likely require the use of smart cards, given their storage capacity and security features. Smart cards have a significantly greater storage capacity than magnetic stripe and bar code cards, and would be able to store more extensive medical information on the cards. However, the storage on smart cards is limited, so it is unlikely that the cards would be able to store all of a beneficiary’s medical records or medical records of a larger file size, such as medical images.
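The contrast between on-card PIN verification (smart cards) and off-card verification (magnetic stripe and bar code cards) can be illustrated with a small sketch. The classes and method names are hypothetical and do not represent an actual smart-card API; the point is only that the chip verifies the PIN itself, while a stripe merely stores static data that a separate host system must check.

```python
# Hypothetical sketch contrasting on-card and off-card PIN verification.
import hashlib

class SmartCard:
    """Simulates a chip that stores a PIN hash and verifies entries itself,
    with no connection to a separate IT system."""
    def __init__(self, pin: str):
        self._pin_hash = hashlib.sha256(pin.encode()).hexdigest()
        self._tries_left = 3  # chips typically lock after repeated bad attempts

    def verify_pin(self, entered: str) -> bool:
        if self._tries_left == 0:
            return False  # card locked
        ok = hashlib.sha256(entered.encode()).hexdigest() == self._pin_hash
        self._tries_left = 3 if ok else self._tries_left - 1
        return ok

class MagStripeCard:
    """A stripe only stores static data; it cannot check anything itself."""
    def __init__(self, card_number: str):
        self.card_number = card_number  # read by a terminal, sent to a host

def host_verify_pin(card: MagStripeCard, entered: str, host_db: dict) -> bool:
    # Verification requires a connection to a separate system holding PIN records.
    return host_db.get(card.card_number) == entered
```

The smart card object answers the PIN check locally; the magnetic stripe card can only be verified by handing its number to an external host, which is the dependency on a separate IT system described above.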
In addition, smart cards could better secure confidential information, including individually identifiable health information subject to protection under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). Smart cards can be implemented with PKI to perform public key encryption and authentication to secure and securely transmit any medical information on the card. Smart cards’ ability to perform on-card verification can also be used to limit access to information on the cards to better ensure that information is not accessed inappropriately. For example, beneficiaries could be required to enter a PIN for providers to access medical information on the card, while access to nonsensitive information could be allowed without beneficiaries entering a PIN. Our analysis also found that any of the three types of electronically readable cards could be used to convey beneficiary identity and insurance information to providers. Each type of card has adequate storage capacity to contain such information, and storing this type of information may not require cards with processing capabilities or security features. If beneficiary SSNs continue to serve as the main component of Medicare numbers, cards with security features would be needed to reduce the risk of identity theft. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could potentially curtail certain types of Medicare fraud, but would have limited effect since CMS has stated that it would continue to pay claims regardless of whether a card was used. Using electronically readable cards to store and exchange medical records is not part of current federal efforts to facilitate health information exchange and would likely present challenges. 
Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider IT systems could reduce errors in the reimbursement process and improve medical record keeping. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could potentially limit certain types of Medicare fraud. However, we could not determine the extent to which authenticating beneficiaries and providers at the point of care could limit fraud because there is no reliable estimate of the extent or total dollar value associated with specific types of Medicare fraud schemes. Stakeholders told us that authenticating beneficiaries at the point of care could potentially limit schemes in which Medicare providers misuse beneficiary Medicare numbers to bill fraudulently for services. In such schemes, providers use beneficiary Medicare numbers to bill on their behalf without having ever seen or rendered care to the beneficiaries. As of May 2014, CMS was aware of 284,000 Medicare beneficiary numbers that had been compromised and potentially used to submit fraudulent claims. Stakeholders also told us that authenticating providers at the point of care could potentially limit fraud schemes in which individuals or companies misuse an unknowing provider’s Medicare enrollment information to submit claims and divert stolen reimbursements. Adding another authentication factor, such as a PIN or a biometric factor, to a beneficiary’s card also could limit the potential for individuals to use a stolen Medicare card to obtain care or bill for services. For example, individuals attempting to use a stolen card could not pose as a beneficiary or bill for services on behalf of a beneficiary without knowing the beneficiary’s PIN. Beneficiaries would still be able to lend their card to others and tell them their PIN, though replicating a biometric factor would be more difficult. 
Examples of such fraud schemes include the following:

- Billed for services for beneficiaries that were never seen or rendered care: Two owners of a home health agency paid kickbacks to obtain information on Medicare beneficiaries and used the information to bill for home health care services that were not actually rendered.
- Unbundled services: A doctor performing surgeries on beneficiaries billed Medicare for individual steps involved in the surgeries, rather than the entire procedure, to fraudulently increase reimbursements.
- Billed noncovered services as covered services: The owner of a medical transport company provided beneficiaries with routine, nonemergency transportation services not covered by Medicare, but billed Medicare for emergency ambulance transportation, which is covered by Medicare.

CMS officials told us that requiring cards to be used would not be feasible because of concerns that doing so would limit beneficiaries’ access to care. Specifically, CMS officials told us the agency would not want to make access to Medicare benefits dependent on beneficiaries having their card at the point of care. According to CMS officials and stakeholders, there are legitimate reasons why a card may not be present at the point of care, such as when beneficiaries or providers forget their cards or during a medical emergency. Because CMS has indicated that it would still process and pay for these claims, providers submitting potentially fraudulent claims could simply not use the cards at the point of care. Some stakeholders noted that CMS could mitigate the risk of paying claims in which cards are not used by using its Fraud Prevention System or other IT systems to identify and investigate providers with suspicious billing patterns related to card use. For example, such systems could identify providers that submit an abnormally high percentage of claims in which cards are not used, which could be indicative of claims for beneficiaries who were never seen or rendered care.
However, CMS officials noted that they already use their IT systems to identify providers that bill for services for beneficiaries who were never seen or rendered care. For example, CMS analyzes billing patterns to identify and conduct postpayment investigations into providers that submit an abnormal number of claims for beneficiaries with known compromised numbers.

Examples of fraud involving collusion include the following:

- Provider paid or received kickbacks for beneficiary referrals for specific services, or for the purchase of goods or services that may be paid for by Medicare: The operator of a home health agency paid illegal kickbacks to physicians to refer beneficiaries who were not homebound or who otherwise did not qualify for home health services, resulting in fraudulent Medicare billing for home health services.
- Beneficiaries solicited or received kickbacks to allow provider to fraudulently bill for services: Two beneficiaries solicited and received kickbacks to serve as patients for a home health agency that fraudulently billed Medicare for physical therapy services.

According to stakeholders, the use of electronically readable beneficiary cards would also have little effect on many other potentially fraudulent and abusive provider billing practices. For example, use of the cards would not prevent providers from mischaracterizing services, billing for medically unnecessary services, or adding a service that was not provided to a claim for otherwise legitimate services, because such fraud does not involve issues related to authentication. Instead, these types of fraud typically involve providers that wrongly bill Medicare for the care provided, or misrepresent the level or nature of the care provided. The use of electronically readable beneficiary and provider cards would also have little effect on preventing fraud that involves collusion between providers and beneficiaries because complicit beneficiaries, including those who receive kickbacks, would likely allow their cards to be misused.
Officials we spoke with in France and Germany told us that the use of electronically readable cards has not limited certain types of fraud. Officials from provider organizations and an insurance organization in Germany told us that the use of beneficiary cards does not prevent providers from fraudulently adding services that they never provided onto otherwise legitimate claims. In addition, officials from France noted that certain elderly or infirm beneficiaries may need to rely on providers to maintain custody of and use their cards, and there had been instances of providers and caretakers misusing beneficiary cards in such cases. For example, officials from an insurance organization in France noted that nurses and caretakers of elderly patients have stolen patient cards and allowed other providers to misuse them. Finally, there are also concerns that the use of an electronically readable card could introduce new types of fraud and ways for individuals to illegally access Medicare beneficiary data. For example, CMS officials said that malicious software written onto an electronically readable card could be used to compromise provider IT systems. In addition, CMS officials noted that individuals could illicitly access beneficiary information through “card skimming.” However, Medicare beneficiary data in provider IT systems may already be vulnerable to illegal access and use. Using electronically readable cards to store and exchange beneficiary medical information is not part of current federal efforts to facilitate electronic health information exchange and would likely present challenges. To help improve health care quality, efficiency, and patient safety, the Medicare EHR Incentive Program provides financial incentives for Medicare providers to increase the use of EHR technology to, among other things, exchange patient medical information electronically with other providers. 
In addition, ONC has funded health information exchange organizations that provide support to facilitate the electronic exchange of health information between providers. These and other ongoing federal health information exchange programs aim to increase the connections and exchanges of medical information directly between provider EHR systems so that patient medical information is available where and when it is needed. None of these existing programs include the use of electronically readable cards to store or exchange medical information. Using electronically readable cards to store and exchange beneficiary medical information would introduce an additional medium to supplement health information exchange among EHR systems, with beneficiaries serving as intermediaries in the exchange. Stakeholders noted that implementing another medium, such as a card, that stores beneficiary medical information outside of provider EHR systems could lead to inconsistencies with provider records. Stakeholders, including a health care IT vendor and a provider organization, stated that storing beneficiary medical information on beneficiary cards in addition to EHR systems could lead to problems with ensuring that medical information is synchronized and current. For example, beneficiaries who have laboratory tests performed after medical encounters would not have a means to upload the results to their cards before visiting their providers again, leading to cards that are not synchronized with provider records. Several stakeholders also stated that using electronically readable cards to store and exchange medical information would likely face similar interoperability issues encountered by federal health exchange programs and providers implementing EHR systems. Information that is electronically exchanged among providers must adhere to the same standards in order to be interpreted and used in EHRs. 
We previously found that insufficient standards for electronic health information exchange have been cited by providers and other stakeholders as a key challenge for health information exchange. For example, we found that insufficient standards for classifying and coding patient allergy information in EHRs could potentially limit providers’ ability to exchange and use such information. The use of electronically readable cards would involve exchanging medical information through an additional medium, but it would also be subject to the same interoperability issues that currently limit exchange. Despite potential challenges using electronically readable cards to store and exchange medical information, several stakeholders noted that adding patient health information to an electronically readable card may have benefits such as better health outcomes in emergency medical situations. For example, a beneficiary card containing medical information could be used by an emergency care provider to access important information that might otherwise be unknown, such as beneficiary allergy information. One potential benefit of electronically readable provider cards is that they could provide an option to authenticate providers accessing EHR systems, especially for remote online access. EHR systems that store patient medical information can be accessed from places outside the clinical setting, and there are concerns regarding the current level of identity authentication to ensure that only authorized providers are accessing the systems remotely. Although no determinations have been made regarding what specific authentication practices are needed, or what types of technology should be used for remote access, an HHS advisory committee has recommended that the Medicare EHR program implement rules regarding how providers should be authenticated when remotely accessing EHR systems. 
According to an electronically readable card industry organization, electronically readable cards could be used to authenticate providers remotely accessing EHR systems. Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider IT systems could reduce errors in the reimbursement process and improve medical record keeping and health information exchange. Many providers currently capture identity and insurance information by photocopying insurance cards and manually entering beneficiary information into their IT systems, which can lead to data entry errors. In addition, providers have different practices for entering beneficiary names, such as different practices for recording names with apostrophes and hyphens, or may use beneficiary nicknames, leading to possible naming inconsistencies for a single individual. The failure to initially collect accurate beneficiary identity and insurance information when providers enter patient information into their IT systems, or retrieve information on existing beneficiaries, can compromise subsequent administrative processes. According to stakeholders, using an electronically readable card to standardize the process of collecting beneficiary identity and insurance information could help reduce errors in the reimbursement process. When beneficiaries’ identity or insurance information is inaccurate, insurers reject claims for those beneficiaries. Providers then must determine why the claims have been rejected, and reimbursements are delayed until issues with the claims are addressed and the claims are resubmitted. Once any issues are addressed, insurers reprocess resubmitted claims. 
Based on data provided by CMS, we found that up to 44 percent of the more than 70 million Medicare claims that CMS rejected between January 1, 2014, and September 29, 2014, may have been rejected because of invalid or incorrect beneficiary identity and insurance information that could be obtained from beneficiaries’ Medicare cards. In addition, HHS has cited an industry study indicating that, industrywide, a significant percentage of denied health insurance claims are due to providers submitting incorrect patient information to insurers. However, CMS officials stated that using electronically readable cards may not necessarily reduce claim rejections because providers may still obtain beneficiary information in other ways, including over the telephone or paper forms that have been filled out by beneficiaries. Stakeholders also told us that problems with collecting beneficiary information can lead to the creation of medical records that are not linked accurately to beneficiaries or records that are linked to the wrong individual, which can lead to clinical inefficiencies and potentially compromise patient safety. For example, problems collecting beneficiary information can prevent providers from retrieving existing beneficiary records from their IT systems, leading providers to create duplicate medical record files that are not matched to existing beneficiary records. Medical records that are not accurately linked to beneficiaries can compromise a provider’s ability to make clinical decisions based on complete and accurate medical records, which can lead to repeat and unnecessary medical tests and services, and adverse events, such as adverse drug interactions. Furthermore, inaccurate and inconsistent beneficiary records can also limit electronic health information exchange by limiting the ability to match records among providers. 
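A quick back-of-envelope check of the figures cited above: if up to 44 percent of the more than 70 million rejected claims involved information obtainable from the Medicare card, that is on the order of 31 million claims. The calculation below is illustrative arithmetic only, using 70 million as a lower bound.

```python
# Back-of-envelope arithmetic on the rejected-claims figures cited above.
rejected_claims = 70_000_000   # "more than 70 million" rejected claims
share_card_related = 0.44      # "up to 44 percent"
affected = int(rejected_claims * share_card_related)
print(f"{affected:,}")  # 30,800,000
```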
We previously found that difficulty matching beneficiaries to their health records has been a key challenge for electronic health information exchange, and this can lead to beneficiaries being matched to the wrong set of records, and to providers needing to match records manually. (VA also recently issued new paper cards to certain veterans to obtain care outside of VA facilities. See the Veterans Access, Choice and Accountability Act of 2014, Pub. L. No. 113-146, § 101(f), 128 Stat. 1754, 1760 (codified at 38 U.S.C. § 1701 note).) In addition, stakeholders noted that many providers collect and verify beneficiary identity and insurance information prior to appointments, through either telephone conversations or online portals to preregister for appointments. This practice of ensuring the accuracy of beneficiary information prior to appointments may limit the possible benefits of using electronically readable cards to convey information at the point of care. CMS would need to update its claims processing systems to use electronically readable cards to authenticate beneficiary and provider presence at the point of care, while using the cards to convey beneficiary identity and insurance information might not require CMS to make IT updates. Similarly, using electronically readable cards for authentication would require updates to CMS’s current card management processes, while using the cards to convey beneficiary identity and insurance might not. For all potential uses of electronically readable cards, Medicare providers could incur costs and face challenges updating their IT systems to read and use information from the cards. Using electronically readable cards to authenticate beneficiaries and providers would require updates to CMS’s claims processing systems to verify that the cards were swiped at the point of care. CMS officials told us they have not fully studied the specific IT updates that would be needed to the claims processing system and could not provide an estimate of costs associated with implementing any updates.
However, they noted that any IT updates would necessitate additional funding and time to implement, and could involve IT challenges. Based on our research, we identified two options for how CMS could verify that the cards were swiped by beneficiaries and providers at the point of care. The first option is based on proposals from an HHS advisory organization and a smart card industry organization. When beneficiaries and providers swipe their cards, CMS’s IT systems would generate and transmit unique transaction codes to providers. Providers would include the transaction codes on their claims. When processing claims, CMS would match the original transaction codes generated by CMS’s IT systems with the codes on submitted claims. For this option, CMS officials told us that they would need to implement an IT system to collect and store data on the transaction codes and build electronic connections with existing claims processing systems to match the codes with submitted claims. The second option is based on the processes used in a CMS pilot program. When beneficiaries and providers swipe their cards, information about the card transaction—such as the date of the transaction and the beneficiary Medicare number and provider NPI associated with the cards—would be sent to CMS. CMS would match this information about the card transaction with information on the claims submitted by the providers. According to officials, this option would similarly involve implementing an IT system to collect and store data on the card transactions and connecting the system with existing claims processing systems to match information about the transactions with submitted claims. CMS officials told us that verifying that beneficiary and provider cards were swiped by including new content on claims—such as unique transaction codes—would be problematic. Doing so would involve changes to industrywide standards for claim submission and the way in which CMS’s IT systems receive submitted claims.
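The first option described above (CMS issues a unique transaction code at the swipe, the provider puts it on the claim, and claims processing matches the two) can be sketched as follows. The data structures, field names, and matching rule are hypothetical illustrations, not CMS's actual systems.

```python
# Minimal sketch of transaction-code matching for point-of-care verification.
# All names and record layouts are hypothetical.
import uuid

issued_codes = {}  # transaction code -> details recorded at the swipe

def record_swipe(beneficiary_id: str, provider_npi: str, date: str) -> str:
    """Simulates CMS issuing a unique transaction code when both cards are
    swiped at the point of care; the code is returned to the provider."""
    code = uuid.uuid4().hex
    issued_codes[code] = {"beneficiary": beneficiary_id,
                          "provider": provider_npi,
                          "date": date}
    return code

def claim_matches(claim: dict) -> bool:
    """Simulates claims processing: the code on the submitted claim must match
    a code CMS issued, for the same beneficiary and provider."""
    swipe = issued_codes.get(claim.get("transaction_code", ""))
    return (swipe is not None
            and swipe["beneficiary"] == claim["beneficiary"]
            and swipe["provider"] == claim["provider"])
```

A claim submitted without a valid code, or with a code issued for a different beneficiary or provider, would fail the match; under the second option described above, CMS would instead match swipe details already present on the claim, avoiding any new claim content.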
These industrywide standards govern the data content and format for electronic health care transactions, including claim submission. Adding new content to claims, such as a field for a transaction code, would require CMS to seek changes to existing claim standards with the standard-setting body responsible for overseeing the data content and format for electronic health care transactions. Officials told us that requesting and having such changes approved could take several years. CMS officials further noted that the IT infrastructure that CMS developed to accept electronic claim submissions was built to accept claims based on current standards and would need to be updated to accept any new content fields. However, under the second option, verifying that the cards were swiped by matching information about the card transaction—such as the date and beneficiary and provider identification information—with information on the claims submitted would not involve additional content on claims, because CMS would be matching the card transactions with information currently included on claims. (See GAO, Information Security: Advances and Remaining Challenges to Adoption of Public Key Infrastructure Technology, GAO-01-277 (Washington, D.C.: Feb. 26, 2001).) Officials from an organization that provides PKI services to federal agencies told us that CMS could leverage such services to use PKI for electronically readable Medicare cards. CMS officials stated that CMS has not studied this issue and said they could not provide any cost estimates for using PKI for electronically readable Medicare cards. In contrast to using electronically readable cards for authentication, using the cards to convey beneficiary identity and insurance information may not require updates to CMS’s IT systems. Using the cards to convey such information primarily involves transferring information from the card to provider IT systems, as opposed to interacting with CMS IT systems.
However, CMS officials said if any additional identity or insurance information is put on an electronically readable card that requires changes to the content or formatting of claims, CMS would have to update its claims processing systems. CMS would need to update and obtain additional resources for its current card management processes to use electronically readable cards to achieve a higher level of authentication for beneficiaries and providers. Card management processes involve procedures for enrollment, issuing cards, replacing cards, updating information on cards, deactivating cards, and addressing cardholder issues, among other processes, as well as developing standards and procedures for card use. Medicare currently does not issue cards to providers, and therefore CMS would need to implement a new program to issue and manage provider cards and to develop standards and procedures for card use. In addition, we found that new standards and procedures for card use would likely need to be developed to implement electronically readable cards to authenticate beneficiaries and providers. Proponents have suggested that NIST standards for electronically readable cards could be used to implement such cards for Medicare. However, these standards generally apply to the issuance and use of smart cards by federal employees and contractors for accessing computers and physical locations, and we found that the application of such standards could present logistical challenges for Medicare and could entail changes to current Medicare card management practices. For example, NIST standards involve procedures for verifying the identities of individuals before they are issued cards and, among other requirements, require potential cardholders to appear in person before being issued a card. Medicare does not require beneficiaries to appear in person to be enrolled in the program and issued cards.
Doing so could present barriers to beneficiary enrollment and could present logistical challenges, given that Medicare covered approximately 54 million beneficiaries in 2014 and CMS does not have an infrastructure in place to meet beneficiaries in person. Additionally, to use the cards with a PKI system, CMS would need to implement processes to update and reissue beneficiary cards as needed to meet security requirements. Currently, the NIST standards require cards to be reissued every 6 years to update the PKI keys on the cards. Reissuing cards on a regular basis would likely require the implementation of new card management processes and additional resources for CMS. As of now, CMS only reissues cards if they are reported as lost, stolen, or damaged, or if there is a change to beneficiary information, such as a name change. CMS would face additional card management challenges and practical concerns to use electronically readable cards in conjunction with a PIN or biometric information. According to CMS officials, implementing PINs or biometrics would come with large costs and would involve significant changes for CMS and beneficiaries. To use PINs, CMS would need to implement processes for creating, managing, and verifying them. CMS officials and other stakeholders also noted that certain Medicare beneficiaries, especially those with cognitive impairments, may not be able to remember their PINs. Officials we spoke with in France told us that they decided not to have beneficiaries use PINs with their cards after a pilot project found that some beneficiaries had difficulties remembering them. In terms of using biometrics, CMS officials and other stakeholders expressed concerns regarding beneficiaries’ willingness to provide biometric information due to privacy considerations and the logistics involved in collecting such information from beneficiaries. 
Both France and Germany are currently issuing cards that include photographs of beneficiaries, and officials from both countries told us that they experienced difficulties collecting them. Both countries allow beneficiaries to submit their photographs by mail, and Germany allows beneficiaries to submit their photographs online. However, because the pictures are not taken in person, there are few controls in place to ensure that beneficiaries submit a representative photograph of themselves. VA includes a photograph of the veteran on its cards, which it generally obtains in person at local medical centers. CMS does not have an infrastructure like VA to take photographs of Medicare beneficiaries. CMS would need to implement processes for securing information on electronically readable cards to use them to store and exchange beneficiary medical information. CMS and ONC officials and other stakeholders expressed concerns about storing individually identifiable health information on the cards and told us that beneficiaries would likely be sensitive to having their medical information on the cards, so the security processes in place to protect this information would need to be rigorous. In particular, processes would be needed for accessing and writing information onto the cards to ensure that beneficiaries could control who could view stored information and to ensure that only legitimate providers are able to access information from or write information onto the cards. In contrast with using electronically readable cards for authentication or to store and exchange beneficiary medical information, we found that CMS would not necessarily need to make changes to current standards and procedures for the cards to electronically convey beneficiary identity and insurance information.
The cards would not be used in a significantly different way than they are now—to convey information that providers use to verify beneficiary eligibility and to submit claims—and accordingly, little would change other than the type of card CMS issues. Instead of a paper card, CMS would need to produce and issue an electronically readable card. Although the use of electronically readable health insurance cards in the United States has been limited, there are existing industry standards for using such cards to convey identity and insurance information. An HHS advisory organization, the Workgroup for Electronic Data Interchange (WEDI), has issued formatting and terminology standards for using electronically readable cards that could be applied to electronically readable Medicare cards. CMS officials also noted that the implementation of electronically readable cards would require beneficiary and provider education and outreach regarding the new cards and any associated changes related to how the cards are used. For example, CMS would have to disseminate information on the different functions and features of any card and information on what to do if the electronically readable functions of the card are not working. For cases where IT systems malfunctioned or IT access was an issue, CMS officials stated the agency would need to have support services in place for providers and beneficiaries, and paper back-up options. For all potential uses of electronically readable cards, Medicare providers could incur costs and face challenges updating their IT systems to read and use information from the cards. For providers to use electronically readable cards, they would need to have hardware, such as card readers, to read information from the cards.
According to stakeholders, including provider organizations; health care IT, transaction standards, billing, and management organizations; and health care IT vendors, providers would generally also need to update their existing IT system software to use the information on cards. For example, to use electronically readable cards to store and exchange beneficiary medical information, providers’ EHR systems would need to be updated to be able to read and use the medical information on the cards. Generally, providers would have to update their existing IT systems with a type of software called middleware to interact with and use information from electronically readable cards, and such updates could involve significant challenges. According to stakeholders we spoke with, provider IT systems, including billing systems and EHRs, vary widely and often are customized to meet the needs of individual providers. While some providers have a single, integrated IT system for billing, tracking patient medical information, and other administrative applications, other providers have individual systems for each application, such as practice management, billing, and EHR systems. Because of the variety and customization of systems in place, providers may need to implement uniquely developed middleware for each software system the cards would interact with to ensure that their IT systems could read and use information from the cards. Updating provider IT systems to use electronically readable cards for beneficiary and provider authentication by including transaction codes on claims could prove particularly challenging. To do so, the cards would need to be able to interact with provider IT systems used for billing so that the systems could incorporate the transaction codes generated by the cards onto provider claim forms.
Stakeholders told us that current provider IT systems are not designed to interact with electronically readable cards to incorporate transaction codes generated by the cards onto claims. Additionally, they said that provider billing practices vary widely, which presents challenges for developing standard ways to update provider IT systems to be able to perform this function. For example, some providers have IT systems capable of directly billing CMS, while others use IT systems that electronically transmit clinical encounter information to third-party billers, who generate and submit claims to CMS. Some providers do not use IT systems, and submit paper claims or clinical encounter information to clearinghouses, which convert the claims into electronic format and submit them to CMS. If information about the card transaction is sent directly to CMS—and no transaction codes are included on claims—providers would not necessarily need to update their existing IT software. In CMS’s 2011 and 2012 electronically readable card pilot program, participating physicians and suppliers did not need to update their IT systems, as they used magnetic stripe cards and sent the information to CMS using existing credit card readers and networks. However, if CMS used smart cards with PKI for authentication rather than magnetic stripe cards and credit card readers, providers would likely need to purchase card readers and software capable of authenticating the cards. While some provider IT systems would need to be updated with middleware to be able to use beneficiary identity and insurance information conveyed by electronically readable cards, some provider systems already have this capability. One vendor noted that its IT systems are capable of using beneficiary identity and insurance information from cards that comply with WEDI electronically readable card standards to auto-populate and retrieve information from their IT systems. 
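To make the transaction-code idea concrete, the sketch below derives a code that binds a beneficiary, a provider, and a timestamp to a single encounter. It is only an illustration: a keyed hash (HMAC) stands in for the on-card cryptography, whereas an actual smart card using PKI would sign a challenge with a private key held on the chip, and all identifiers shown are made up.

```python
import hashlib
import hmac

def transaction_code(card_secret: bytes, beneficiary_id: str,
                     provider_id: str, timestamp: str) -> str:
    """Derive a short code tying one beneficiary, provider, and time together."""
    message = f"{beneficiary_id}|{provider_id}|{timestamp}".encode()
    # HMAC-SHA256 is a stand-in here for the smart card's PKI signature.
    return hmac.new(card_secret, message, hashlib.sha256).hexdigest()[:16]

# A billing system would place this code on the claim form; a verifier
# holding the same secret could recompute it to confirm the encounter.
code = transaction_code(b"demo-secret", "1EG4TE5MK72",
                        "NPI-1234567890", "2015-01-15T10:30")
print(len(code))  # prints 16
```

The integration challenge described above is precisely that existing billing systems have no field or workflow for carrying such a code from a card reader onto the claim.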
In addition, an insurer that issues electronically readable cards that comply with WEDI standards told us that there are providers that currently use its cards to auto-populate information into their IT systems, though this insurer could not estimate the percentage of providers who do so. In addition to updating IT systems, CMS officials and stakeholders also expressed concerns regarding how using electronically readable cards to authenticate providers at the point of care would be incorporated into provider workflows. During the pilot program conducted by CMS, participating providers told CMS that using the cards was an administrative burden that required changes to their workflows. Stakeholders noted that it might not be practical for providers to swipe the cards during the course of providing care and that the cards might instead be used by administrative or billing staff. However, having administrative staff use provider cards could create complexity in terms of card use and limit the ability of the card to be used to authenticate provider presence at the point of care. For some providers, administrative and billing processes might not take place at the same location where care is provided. Stakeholders also expressed logistical concerns regarding when and how beneficiary and provider cards would be swiped at the point of care. At larger provider facilities, such as hospitals, having beneficiaries and providers swipe their cards at the point of care might require providers to deploy many card readers within a single facility. Additionally, stakeholders expressed concerns regarding how the cards would be used when multiple providers provide care during a single medical encounter. For example, a beneficiary experiencing a medical emergency may be provided care by an ambulance company, hospital, and attending physicians.
With each provider submitting its own claim for reimbursement, it raises questions regarding how a single swipe of the beneficiary’s card would be matched to each of the claims submitted by the providers. Further, stakeholders raised questions regarding how the cards would be used by providers that may have little contact with beneficiaries, such as laboratories. Many stakeholders also cited potential challenges encouraging providers to incur costs to purchase hardware and update their IT systems to use the cards, especially given existing CMS IT requirements. Officials at CMS and ONC, along with stakeholders, noted that Medicare providers are already investing resources, and facing IT challenges, to meet Medicare EHR Incentive Program requirements and to update their IT systems to adopt new billing codes. Both France and Germany have experienced similar challenges with provider reluctance to incur costs to use electronically readable cards. According to officials from organizations we spoke with in those countries, financial subsidies to purchase hardware and update IT systems, and financial incentives for card use have been key to encouraging provider participation. France and Germany have each successfully implemented an electronically readable card system—specifically, a smart card system—on a national scale in their health care systems. The implementation of these systems provides lessons that could inform U.S. policymakers in deciding whether to adopt an electronically readable card for Medicare. Both countries’ experiences demonstrate that implementation of an electronically readable card would likely be a long process and would require that competing stakeholder needs be discussed and addressed. Further, the experiences of France and Germany illustrate that after implementation, management of an electronically readable card system is a continuing and costly process.
France and Germany’s successful implementation of an electronically readable card system demonstrates that implementation of such a system on a national scale is possible. According to the organization that manages the smart card system in France, 50 million citizens, or about 76 percent of the population in France, used a beneficiary card and more than 300,000 health care providers used a health care provider card as part of a health care service in 2013. Approximately 90 percent of France’s health care claims were generated by swiping both a beneficiary and a health care provider smart card. In Germany, approximately 70 million citizens, or about 85 percent of the population, used a smart card provided to beneficiaries as their health insurance card in 2014, according to government officials. The experiences of both countries also demonstrate that the implementation of an electronically readable card system can be a long process. France has had a smart card system for beneficiaries and health care providers since 1998. Officials from the organization that manages the smart card system in France told us that implementation of the system had been a slow process in part because many providers lacked the IT equipment—such as computers and printers—needed to manage their health care practices and had to obtain that equipment before being able to participate in the card system. Health care providers’ resistance to voluntarily adopting and using the smart cards—despite financial incentives to do so—also contributed to the delay in implementing the smart card system fully. Fourteen years after the implementation of the smart card system in France, about 95 percent of self-employed health care providers and 18 percent of hospital-based providers in France were using health care provider cards. 
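As a quick consistency check on the adoption figures above, the reported user counts and coverage shares imply total populations in line with France's and Germany's actual sizes. The inputs below are only the numbers quoted in the text.

```python
# Reported card users and the share of the population they represent.
france_users, france_share = 50_000_000, 0.76    # France, 2013
germany_users, germany_share = 70_000_000, 0.85  # Germany, 2014

# Implied total populations, in millions.
print(round(france_users / france_share / 1e6))   # prints 66
print(round(germany_users / germany_share / 1e6)) # prints 82
```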
While the initial cards for beneficiaries were distributed in 2 to 3 years, according to French officials, issuance of an updated beneficiary card with a picture has been a slower process. French officials explained that the process of adding a photograph to the beneficiary card and issuing the updated cards has been ongoing since 2007. As of September 2014, 35 percent of beneficiary cards in France being used for health care had been issued 15 years ago, according to the organization that represents health care insurers. In 1995, Germany implemented a memory-only smart card that included information such as name, address, and insurance status. The card was used to electronically transfer this information to the health care providers’ IT systems. According to a report by the German auditing agency, in 2003 Germany required that a new smart card containing a microprocessor chip and with the capability to add new functionality be implemented by January 2006. This report also indicated that due to technical problems and stakeholder disagreements, the initial roll out of the new cards did not occur until October 2011. By the end of 2013, almost all of the population insured through the statutory health insurance system had been issued the new cards and providers were equipped with the readers that could access information from both the new smart card and the previous memory-only smart card. However, German officials told us that the full transition to the new cards will not be complete until early 2015, when beneficiaries will no longer be able to use the memory-only cards. Currently, the new smart cards are being used in the same way as the memory-only card. According to officials in Germany, new applications will be added to the new card incrementally, with the ability to update insurance information on the card being the first application and then an expansion to storing emergency care information, such as allergies and any drug interactions. 
Officials explained that full implementation of the new smart card—with all of the applications added—will not be completed until 2018, more than 10 years later than mandated. The initial implementation of any new card system in Medicare could also be a lengthy process because CMS would need time to address the challenges that we described earlier. Similarly, experiences in both France and Germany have illustrated that updating a card system has the potential to be as lengthy and resource-intensive a process as the initial implementation. French officials noted that being clear about how an electronically readable card will be used and developing a system that can be easily updated are key lessons that the Medicare program should consider. Officials in France and Germany indicated that their governments implemented smart card systems to simplify and improve administrative processes in their health care systems. Specifically, both countries implemented a smart card as a means to move from a paper-based to an electronic billing and reimbursement process. In addition to administrative improvements, officials from both countries noted that the shift from paper to electronic billing and reimbursement has resulted in financial savings. For example, government officials in France told us that the estimated cost to process a paper claim is $2.40 per claim, while processing an electronic claim cost $0.20. Officials from France’s federal auditing agency claim that the cards have been largely successful, with 93 percent of claims being submitted electronically in 2014, resulting in an estimated savings of approximately $1.5 billion per year. 
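The per-claim figures quoted for France can also be used for a rough consistency check. Assuming the reported savings come entirely from the $2.40 versus $0.20 per-claim cost difference (a simplification, as French officials themselves note attribution is difficult), the implied annual electronic claim volume is:

```python
paper_cost = 2.40        # reported cost to process a paper claim
electronic_cost = 0.20   # reported cost to process an electronic claim
annual_savings = 1.5e9   # reported estimated savings per year

# Implied electronic claims per year under the per-claim-difference assumption.
implied_claims = annual_savings / (paper_cost - electronic_cost)
print(f"{implied_claims / 1e6:.0f} million electronic claims per year")  # prints 682 million electronic claims per year
```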
However, according to officials from the organization that manages the beneficiary card system in France, it is difficult to isolate how much of that savings can be attributed specifically to the use of the smart cards, given that electronic billing and reimbursement could have been achieved by using technology other than an electronically readable card. German officials also reported, but did not quantify, savings associated with using smart cards to move to an electronic billing and reimbursement process. The cost savings that France and Germany report from moving to electronic billing would not necessarily be achievable for Medicare, which has a long-standing electronic claims processing system that enables both Medicare and health care providers to process claims faster and at a lower cost. Some health care providers have been submitting claims electronically since 1981, and by law Medicare has been prohibited from paying claims not submitted electronically since October 16, 2003, with limited exceptions. French and German government officials told us that it is important to ensure that the competing needs of stakeholders are discussed and addressed. Officials also stated that in their experience this part of the process generally required a significant time investment and should occur prior to the decision to implement any electronically readable card. For instance, officials from provider organizations in Germany told us that health care providers took issue with what they viewed as a continued emphasis on enhancing the administrative, rather than the clinical, features of the card. Officials explained that providers and hospitals had objected to the decision to add the ability to electronically update identity and insurance information before adding the ability to store emergency care information on the new smart card. 
They stated that the new smart card is currently being used the same way as the memory-only smart card—to electronically transfer a beneficiary’s identity and insurance information to the health care providers’ IT system—which provides no new benefits for providers relative to the memory-only smart cards. In both France and Germany, the government established independent organizations to address stakeholders’ needs. For example, officials from the independent organization in Germany told us that it has seven stakeholder groups, including the National Association of Statutory Health Insurance Funds as the sole representative of all health insurance funds and six umbrella organizations representing health care providers. Officials explained that each group is assigned a different share of interest in the organization, with the stakeholder group that funds the organization holding a 50 percent share. An organization like those established in France and Germany may not be necessary to solicit input from stakeholders in the United States. However, successful implementation of an electronically readable card system for the Medicare program would depend on stakeholder participation. An official from a health care billing and management organization told us that before implementation of any electronically readable cards for Medicare, CMS should obtain input from beneficiary and consumer advocacy groups on how the cards should be implemented. This official also told us that CMS would need to educate beneficiary and provider groups on the benefits of electronically readable cards and how to use them because beneficiary and provider buy-in would help CMS in implementing the cards. CMS officials confirmed that implementing an electronically readable card could result in a number of policy challenges that may cause resistance from provider and beneficiary advocacy organizations. 
CMS officials acknowledged that the agency would have to work with multiple stakeholders who have competing priorities if they were to move forward with the development and implementation of an electronically readable card. Furthermore, implementing an electronically readable card system for Medicare would be done in a different health IT landscape than France’s and Germany’s. Officials in both France and Germany told us that they began implementing their systems when health care providers’ use of IT systems was limited. However, in the United States, health IT is more advanced than it was in France and Germany when they first implemented the electronically readable cards. Nevertheless, according to officials from a U.S. health insurer, the disparate IT systems of health care providers in the United States will need to be modified in order to implement an electronically readable card system. French officials noted that implementation is easier when the electronically readable card system does not have to be built on top of existing hardware and software. Management of an electronically readable card system includes maintaining the technical infrastructure as well as continuously producing and issuing the cards. Officials from France and Germany reported that the process of managing an electronically readable card system is costly and needs to be taken into account when deciding whether to implement such a system. The independent organizations that are responsible for addressing stakeholders’ needs related to the card systems in France and Germany also have an ongoing role in managing these systems. In France, an additional organization manages the health care provider card system. (See table 1.) Officials in both France and Germany told us that they experienced significant costs related to managing the system beyond initial implementation costs. 
For example, in France, government officials explained that it costs about $37 million annually to maintain the infrastructure for the beneficiary card and nearly $31 million per year in IT and human resources costs for the provider card. In addition, there are annual costs to produce, issue, and deactivate the cards. In France, for instance, the cost to produce and issue beneficiary cards is approximately $2.50 per card, and production and issuance costs for provider cards range from about $8 to $12 per card, depending on the method used to mail the card. In Germany, the National Association of Statutory Health Insurance Funds finances the organization that manages the technical infrastructure for the card system, though the individual insurance funds are responsible for producing and issuing the beneficiary smart cards. Officials from this organization told us that they are paid about $2.40 per beneficiary annually for the development of the infrastructure. In 2014, there were approximately 70 million beneficiaries using the electronically readable cards in Germany, which equates to about $168 million in development costs. U.S. policymakers would need to determine the extent to which CMS or other organizations would be responsible for the implementation and management of an electronically readable system for Medicare. Some of the responsibilities that the French and German organizations address, such as certifying the software, are currently being addressed by another agency within HHS. Policymakers would also need to determine the appropriate agencies or organizations that should be involved in developing and implementing such a system. As consideration is given to whether to increase the functionality of the current Medicare beneficiary card, and whether to implement cards for providers, the planned use of the cards will guide the type of card technology that is needed.
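The German infrastructure figure above is straightforward arithmetic, shown here only to make the derivation explicit (per-beneficiary fee times the number of card users):

```python
per_beneficiary_fee = 2.40  # annual payment per beneficiary, as reported
beneficiaries = 70_000_000  # approximate card users in Germany, 2014

development_cost = per_beneficiary_fee * beneficiaries
print(f"${development_cost / 1e6:.0f} million")  # prints $168 million
```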
The planned use of the cards will also prompt additional discussions regarding card management processes and standards, including whether use would be mandatory, whether PINs or biometric factors would be used in addition to the cards, whether enrollment and card issuance processes would need to be updated, and what information would be stored on the card. We found that electronically readable cards would have a limited effect on program integrity, but could aid administrative processes. Ultimately, a decision about whether to implement an electronically readable card will rest upon a determination regarding the costs and benefits of electronically readable cards compared to the current paper card or other strategies and solutions. The success of any electronically readable card system will also depend on participation from health care providers, and therefore any planned use will need to take provider costs and potential challenges into consideration. Finally, as demonstrated by the experiences in France and Germany with smart cards, implementing and maintaining an electronically readable Medicare card system would likely require considerable time and effort. We provided a draft of this report to HHS for comment. HHS provided technical comments, which we incorporated as appropriate. In addition, we obtained comments from officials from the Smart Card Alliance, an organization that represents the smart card industry. The officials emphasized the greater capability of smart cards to authenticate transactions and secure information on the cards than other electronically readable card options. Smart Card Alliance officials commented that the way in which CMS has indicated that it would implement electronically readable cards in Medicare would diminish the cards’ potential to limit fraud. 
Further, the officials commented that we underestimated the potential of electronically readable cards to further CMS’s program integrity efforts, particularly CMS’s ability to identify potential fraud through postpayment claims analysis. The officials said that CMS could have greater assurance in the legitimacy of claims associated with card use and that the agency could better focus its analysis on claims in which cards were not used. Finally, the officials commented that possible challenges applying NIST standards for using electronically readable cards in Medicare should not preclude card implementation because standards that better align with the needs of the program could be developed. We believe that our report accurately characterizes the potential effects of electronically readable cards on Medicare program integrity efforts, though we modified several statements to improve clarity. We also incorporated the Alliance’s technical comments as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, the National Coordinator for Health Information Technology, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
To examine the potential benefits and limitations associated with the use of electronically readable cards in Medicare and the steps CMS and Medicare providers would need to take to implement and use electronically readable cards, we interviewed officials from the agencies and organizations listed in table 2. Several European countries, including France and Germany, use electronically readable cards for health care purposes, such as transferring identity and insurance information electronically from the card to a health care provider’s IT system. France and Germany have long-standing experience with the use of such cards. As part of our research on the potential use of electronically readable cards in Medicare, we visited France and Germany to learn about how they developed and used the cards. This appendix provides information on each country’s health care system, and how electronically readable cards are used within that system. Health care coverage in France has been universal since 2000. All residents may receive publicly financed health care through noncompetitive health insurance funds (commonly referred to as statutory health insurance funds)—six entities whose membership is based on the occupation of the individual. Specifically, eligibility to receive statutory health insurance is granted either through employment (to salaried or self-employed working persons and their families) or as a benefit to persons (and their families) who have lost their jobs, to students, and to retired persons. The state covers the health insurance costs of residents not eligible for statutory health insurance, such as unemployed persons. The French system of health insurance is composed of two tiers. The first tier provides basic coverage through the statutory health insurance funds, which cover about 75 percent of household medical expenses.
The statutory health insurance coverage includes hospital care and treatment in public or private rehabilitation; outpatient care provided by general practitioners, specialists, dentists, and midwives; and prescription drugs. The second tier consists of complementary and supplementary voluntary health insurance coverage provided by mutual (not-for-profit) or private insurers that pay for services not covered by statutory health insurance. France’s health care system uses two electronically readable cards—a beneficiary card and a health care provider card—as part of its billing and reimbursement processes; both are smart cards. Generally, beneficiaries make payment to the health care provider when services are delivered, and the health insurance funds reimburse the beneficiary. In certain circumstances, such as when services are provided by pharmacists and radiologists, third-party payment or reimbursement directly to the health care provider is used. When services are provided, the beneficiary and the health care provider both insert their cards into a two-card reader at the point of service. The software enables the health care provider to enter medical consultation information into the provider’s IT system. That information is used to generate an electronic health claim form, which is sent to the statutory health insurance fund and the supplementary voluntary health insurance fund for payment to either the beneficiary or the health care provider. (See fig. 2.) Health insurance has been mandatory for all citizens and permanent residents of Germany since 2009. There are two primary sources of health insurance in Germany—the publicly financed health insurance (commonly referred to as the statutory health insurance system) and the private health insurance system. Under the statutory health insurance system, which covered about 86 percent of the population in 2013, health insurance is generally provided by competing, not-for-profit, nongovernmental health insurance funds (called “sickness funds”). 
As of January 2013, there were 134 sickness funds operating under the statutory health insurance system. All employed citizens earning less than $4,874 per month ($70,489 per year) as of 2013 are covered by the statutory health insurance system, and they and their dependents are covered without charge. Individuals whose gross wages exceed the threshold, civil servants, and those who are self-employed can choose to participate in statutory health insurance or purchase private health insurance, which covered about 11 percent of the population in 2013. Statutory health insurance coverage includes preventive services, inpatient and outpatient hospital care, physician services, prescription drugs and sick leave compensation. Private health insurance covers minor benefits not covered by statutory health insurance, access to better amenities, and some copayments (e.g., for dental care). Germany first introduced a beneficiary, memory-only health insurance smart card in 1995. German citizens who were members of a public, statutory health insurance fund were issued the memory-only card, which contained beneficiary insurance information. This card was used to electronically transfer the information stored on the card to health care providers’ IT systems. More recently, Germany initiated a project to modernize its health care system with the introduction of a secure network infrastructure. Part of this project included updating the beneficiary smart card with a card that has the capability to store and process information. In 2011, Germany began issuing the updated smart card, which contains the same information as the memory-only card and is currently being used in the same way, which is to auto-populate health providers’ IT systems.
According to German officials, new applications will be added incrementally to the updated smart card, with the card eventually being used to access and update online beneficiary health insurance information and exchange beneficiary medical information. As of September 2014, officials told us that all applications will not be added until 2018. Kathleen M. King, (202) 512-7114 or [email protected]. In addition to the contact named above, Lori Achman, Assistant Director; George Bogart; Michael Erhardt; Deitra Lee; Elizabeth T. Morrison; Vikki Porter; Maria Stattel; and Kate Tussey made key contributions to this report. | Proposals have been put forward to replace the current paper Medicare cards, which display beneficiaries' Social Security numbers, with electronically readable cards, and to issue electronically readable cards to providers as well. Electronically readable cards include cards with magnetic stripes and bar codes and “smart” cards that can process data. Proponents of such cards suggest that their use would bring a number of benefits to the program and Medicare providers, including reducing fraud through the authentication of beneficiary and provider identity at the point of care, furthering electronic health information exchange, and improving provider record keeping and reimbursement processes. GAO was asked to review the ways in which electronically readable cards could be used for Medicare. This report (1) evaluates the different functions and features of electronically readable cards, (2) examines the potential benefits and limitations associated with the use of electronically readable cards in Medicare, (3) examines the steps CMS and Medicare providers would need to take to implement and use electronically readable cards, and (4) describes the lessons learned from the implementation and use of electronically readable cards in other countries. 
To do this, GAO reviewed documents, interviewed stakeholders, and conducted visits to two countries with electronically readable card systems. The Centers for Medicare & Medicaid Services (CMS)—the agency that administers Medicare—could use electronically readable cards in Medicare for a number of different purposes. Three key uses include authenticating beneficiary and provider presence at the point of care, electronically exchanging beneficiary medical information, and electronically conveying beneficiary identity and insurance information to providers. The type of electronically readable card that would be most appropriate depends on how the cards would be used. Smart cards could provide substantially more rigorous authentication than cards with magnetic stripes or bar codes, and provide greater security and storage capacity for exchanging medical information. All electronically readable cards could be used to convey beneficiary identity and insurance information since they all have adequate storage capacity to contain such information. Using electronically readable cards to authenticate beneficiary and provider presence at the point of care could curtail certain types of Medicare fraud, but would have limited effect since CMS officials stated that Medicare would continue to pay claims regardless of whether a card was used due to legitimate reasons why a card may not be present. CMS officials and stakeholders told us that claims should still be paid even when cards are not used because they would not want to limit beneficiaries' access to care. Using electronically readable cards to exchange medical information is not part of current federal efforts to facilitate health information exchange and, if used to supplement current efforts, it would likely involve challenges with interoperability and ensuring consistency with provider records. 
Using electronically readable cards to convey identity and insurance information to auto-populate and retrieve information from provider information technology (IT) systems could reduce reimbursement errors and improve medical record keeping. To use electronically readable cards to authenticate beneficiaries and providers, CMS would need to update its claims processing systems to verify that the cards were swiped at the point of care. CMS would also need to update its current card management processes, including issuing provider cards and developing standards and procedures for card use. Conversely, using the cards to convey beneficiary identity and insurance information might not require updates to CMS's IT systems or card management practices. For all potential uses, Medicare providers could incur costs and face challenges updating their IT systems to use the cards. The experiences of France and Germany demonstrate that an electronically readable card system can be implemented on a national scale, though implementation took years in both countries. It is unclear if the cost savings reported by both countries would be achievable for Medicare since the savings resulted from using the cards to implement electronic billing, which Medicare already uses. Both countries have processes in place to manage competing stakeholder needs and oversee the technical infrastructure needed for the cards. The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
FFS Medicare generally pays providers directly for the services they perform—such as paying physicians for office visits—based on predetermined payment formulas. FFS payments are based on claims data received directly from providers. CMS relies primarily on prepayment automated checks and postpayment medical reviews to identify and recover FFS improper payments. Under the Improper Payments Information Act of 2002 (IPIA), as amended, CMS reported that the FFS improper payment rate was 11 percent for fiscal year 2016. Two-thirds of the FFS improper payment rate, according to CMS, was a result of insufficient documentation. CMS and its contractors engage in a number of activities to prevent, identify, and recover improper payments in FFS. The Patient Protection and Affordable Care Act of 2010 included provisions designed to strengthen Medicare’s provider enrollment and screening requirements. Subsequently, CMS implemented a revised screening process for new and existing providers and suppliers based on the potential risk of fraud, waste, and abuse. In November 2016, we evaluated this revised screening process and found that CMS used the new process to screen and revalidate over 2.4 million unique applications and existing enrollment records. As a result of this process, over 23,000 new applications were denied or rejected, and over 703,000 existing enrollment records were deactivated or revoked. CMS estimates that this process saved $2.4 billion in Medicare payments to ineligible providers and suppliers from March 2011 to May 2015. Also in FFS, CMS uses different types of contractors to conduct prepayment and postpayment reviews of Medicare claims at high risk for improper payments. We examined the review activities of these contractors and in April 2016 reported that using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS’s goal to pay claims correctly the first time. 
In addition, prepayment reviews can better protect Medicare funds because not all overpayments can be collected. We recommended that CMS seek legislation to allow Recovery Auditors, who are currently paid on a postpayment contingency basis from recovered payments, to conduct prepayment reviews. Although CMS did not concur with this recommendation, we continue to believe CMS should seek legislative authority to allow Recovery Auditors to conduct these reviews. Medicare Administrative Contractors (MACs) process Medicare claims, identify areas vulnerable to improper billing, and develop general education efforts focused on these areas. In March 2017, we evaluated MACs’ provider education efforts to help reduce improper billing. We found that CMS collects limited information about how the efforts focus on the areas MACs identify as vulnerable to improper billing, and recommended that CMS require MACs to report in sufficient detail to determine the extent to which their provider education efforts focus on vulnerable areas. According to CMS, the agency has updated its reporting guidance and MACs will begin reporting more detailed information beginning in July 2017. Whereas Medicare pays FFS providers for services provided, Medicare pays MAOs a fixed monthly amount per enrollee regardless of the services enrollees use. To identify and recover MA improper payments resulting from unsupported data submitted by MAOs for risk adjustment purposes, CMS conducts two types of RADV audits: national RADV activities and contract-level RADV audits. Both types determine whether the diagnosis codes submitted by MAOs are supported by a beneficiary’s medical record. CMS conducts national RADV activities annually to estimate the national IPIA improper payment rate for MA. For 2016, CMS estimated that 71 percent of the improper payments resulted from the insufficient medical record documentation MAOs submitted to CMS that did not support diagnoses they had previously submitted to CMS. 
The second type of RADV audit, contract-level audits, seeks to identify and recover improper payments from MAOs, and thus deter MAOs from submitting inaccurate diagnosis information. CMS conducted contract-level audits of 2007 payments for a sample of enrollees in 32 MA contracts. CMS’s goal is to conduct contract-level audits annually to recover improper payments efficiently, among other things. It plans to recoup overpayments by calculating a payment error rate for a sample of enrollees in each audited contract and extrapolating the error rate to estimate the total amount of improper payments made under the contract. CMS has RADV audits underway for three payment years—2011, 2012, and 2013. In general, CMS audits about 5 percent of contracts for each year, or roughly 30 contracts. CMS calculates a beneficiary’s risk score—a relative measure of projected Medicare spending—based on both demographic characteristics and health status (diagnoses). The agency uses Medicare data to determine a beneficiary’s demographic characteristics; however, it must rely on data submitted by MAOs for health status information. CMS requires MAOs to submit diagnosis codes for each beneficiary in a contract in order to calculate risk scores. Since 2004, CMS has used the Risk Adjustment Processing System (RAPS) to collect diagnosis information from MAOs. In 2012, CMS began requiring MAOs to submit encounter data. Such data include diagnosis and treatment information for all medical services and items provided to an enrollee, with a level of detail similar to FFS claims. Since 2015, CMS has used both RAPS and encounter data submitted by MAOs to risk adjust MA payments. When CMS proposed collecting encounter data in 2008, the agency stated it would use the data for risk adjustment and may also use them for specified additional payment and oversight purposes.
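The sampling-and-extrapolation step described above can be illustrated with a small arithmetic sketch. The contract figures below are invented for illustration and are not drawn from the audits discussed in this statement.

```python
# Hypothetical illustration of the extrapolation step: a payment error rate
# is estimated from an audited sample of enrollees, then applied to the
# contract's total payments. All dollar figures are invented.

def estimate_improper_payments(sample_paid, sample_supported, total_contract_payments):
    """Extrapolate a sampled payment error rate to a whole MA contract."""
    sample_overpayment = sample_paid - sample_supported
    error_rate = sample_overpayment / sample_paid  # share of payments unsupported
    return error_rate * total_contract_payments

# Audited sample: $1,000,000 paid, but medical records support only $930,000.
estimated = estimate_improper_payments(1_000_000, 930_000, 250_000_000)
print(f"sample error rate: {(1_000_000 - 930_000) / 1_000_000:.1%}")  # 7.0%
print(f"estimated contract overpayment: ${estimated:,.0f}")           # $17,500,000
```

The point of extrapolating, rather than recovering only the sampled overpayments, is that a contract-level error rate lets CMS recoup an estimate of all improper payments under the contract from a review of relatively few medical records.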
CMS has recognized the importance of ensuring that the data collected are complete—representing all encounters for all enrollees—and accurate— representing a correct record of all encounters that occurred—given the important functions for which the data will be applied. In our 2016 report, we found several factors that hamper CMS’s recovery activities, including its failure to select contracts for audit that have the greatest potential for payment recovery, delays in conducting CMS’s first two RADV payment audits, and its lack of specific plans or a timetable for incorporating Recovery Audit Contractors (RACs) into the MA program to identify improper payments and help with their recovery. Our 2016 report found that the results from the RADV audits of 2007 payments indicated that the scores CMS calculates to identify contracts that are candidates for audit, called coding intensity scores, were not strongly correlated with the percentage of unsupported diagnoses. CMS defines coding intensity as the average change in the risk score component specifically associated with the reported diagnoses for the beneficiaries in each contract. Increases in coding intensity measure the extent to which the estimated medical needs of the beneficiaries in a contract increase from year to year; thus, contracts whose beneficiaries appear to be getting “sicker” at a relatively rapid rate, based on the information submitted to CMS, will have relatively high coding intensity scores. Figure 1 shows, for example, that CMS reported that the percentage of unsupported diagnoses among the high coding intensity contracts it audited (36 percent) was nearly identical to the percentage among the medium coding intensity contracts (35.7 percent). Our report also found that the RADV audits were not targeted to contracts with the highest potential for improper payments. We identified two reasons that the RADV audits were not targeted on the contracts with the greatest potential for recoveries. 
The first reason is that the coding intensity scores have shortcomings. For example, our report found that CMS’s calculation may be based on scores that are not comparable across contracts, because the years of data used for each contract may differ, and there are known year-to-year differences in coding intensity scores. In addition, CMS’s calculation does not distinguish between diagnoses likely coded by providers and diagnoses subsequently coded by MAOs. Medical records that providers create from diagnoses are apt to support the diagnoses better than diagnoses subsequently coded by the MAO through medical record review. CMS has a method available to it—the Encounter Data System—that will distinguish between the two diagnoses. Although using encounter data would help target the submitted diagnoses that may be most likely related to improper payments, CMS has not outlined plans for using it. Furthermore, CMS follows contracts that are renewed or consolidated under a different existing contract within the same MAO, but CMS’s coding intensity calculation does not incorporate prior risk scores from an earlier contract into the MAO’s renewed contract. This could result in an improper payment risk if MAOs move beneficiaries with higher risk scores, such as those with special needs, into one consolidated contract. The second reason audits are not targeted to the contracts with the greatest potential for recovery is that CMS does not always use the information available to it to select audit contracts with the highest potential for improper payments. CMS did not always target the contracts with the highest coding intensity scores, use results from prior contract-level RADV audits, account for contract consolidation, or account for contracts with high enrollment. For example, only four of the contracts selected for the 2011 RADV audit had coding intensity scores at the 90th percentile or above.
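As a rough illustration of how the selection signals discussed here might be combined, the sketch below ranks hypothetical contracts by prior audit findings (falling back to coding intensity when a contract has never been audited), weighted by enrollment. The contract data, weights, and fallback rule are all assumptions for illustration, not CMS's actual methodology.

```python
# Hypothetical sketch of audit-contract selection combining the signals the
# report says CMS did not consistently use: coding intensity, prior-audit
# results, and enrollment size. All contract data are invented.

contracts = [
    # (contract_id, coding_intensity_percentile, prior_unsupported_rate, enrollment)
    ("H001", 95, 0.36, 120_000),
    ("H002", 55, 0.35, 400_000),
    ("H003", 92, None, 15_000),   # never audited before
    ("H004", 30, 0.10, 80_000),
]

def audit_priority(contract):
    _, intensity_pct, prior_rate, enrollment = contract
    # Prior audit findings are the most direct evidence of improper payment
    # risk; fall back to coding intensity alone when no prior audit exists.
    risk = prior_rate if prior_rate is not None else intensity_pct / 100
    # Weight by enrollment: the same unsupported-diagnosis rate represents
    # more recoverable dollars in a large contract.
    return risk * enrollment

for cid, *_ in sorted(contracts, key=audit_priority, reverse=True):
    print(cid)  # H002, H001, H003, H004
```

Note how the high-enrollment contract H002 outranks H001 despite a lower coding intensity percentile, reflecting the report's point that a similar rate of unsupported diagnoses represents more recoverable dollars when enrollment is large.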
Even though we found that coding intensity scores are not strongly correlated with diagnostic discrepancies, they are still somewhat correlated. Also, CMS’s 2011 contract selection methodology did not consider results from the agency’s prior RADV audits, potentially overlooking information indicating contracts with known improper payment risk. Finally, even though the potential dollar amount of improper payments to MAOs with high rates of unsupported diagnoses is likely greater when contract enrollment is large, CMS officials stated that the 2011 contract-level RADV audit contract selection did not account for contracts with high enrollment. We made two recommendations to address these issues: We recommended that (1) CMS improve the accuracy of coding intensity calculations, and (2) modify its processes for selecting contracts for RADV audit to focus on those most likely to have improper payments. In July 2017, CMS officials told us that the agency is working to implement these recommendations regarding the selection of contracts for audit. These officials said that CMS is reevaluating the design of the RADV audits to ensure its rigor in the context of all the payment error data acquired since the original design of the RADV audits, including an examination of whether coding intensity is the best criterion to use to select contracts for audit. Our 2016 report found that prior contract-level RADV audits have been ongoing for years, and CMS lacks an annual timetable to conduct and complete audits. CMS officials reported at that time that the current and previous contract-level RADV audits had been ongoing for several years. CMS has audits for payment years 2011, 2012, and 2013 underway. We concluded that this slow progress in completing audits conflicted with CMS’s goal of conducting contract-level RADV audits annually, and slowed recovery of improper payments. CMS lacked a timetable that would help the agency complete these contract-level audits annually. 
In this regard, CMS had not followed established project management principles, which call for developing an overall plan to meet strategic goals and to complete projects in a timely manner. In addition to the lack of a timetable, we found other factors that lengthened the time frame of the contract-level audit process. CMS’s sequential notifications to MAOs, which first identify the contracts selected for audit and then, sometimes months later, the beneficiaries under those contracts, create a time gap that hinders the agency from conducting annual audits. Technology problems with CMS’s system for receiving medical records are the main cause of the delay in completing CMS’s contract-level audits of 2011 payments. Additional technical issues with other systems led CMS to more than triple the medical record submission time frame for the 2011 audits. Our report found that disputes and appeals of contract-level RADV audits have also continued for years, and CMS has not incorporated measures to expedite the process. Nearly all of the MAOs whose contracts were included in the 2007 contract-level RADV audit cycle disputed at least one diagnosis finding following medical record review. CMS stated that MAOs disputed a total of 624 (4.3 percent) of the 14,388 audited diagnoses, and that the determinations on these disputes, which were submitted from March through May 2013, were not complete until July 2014. In addition, because the dispute process took a year and a half to complete, CMS officials stated that it did not receive all 2007 appeal requests for hearing officer review until August 2014. The hearing officer adjudicated or received a withdrawal request for 377 of the 624 appeals from August 2014 through September 2015. For the 2011 audit cycle, CMS officials stated that the medical record dispute process will be incorporated into the appeal process.
Thus, MAOs can request reconsideration of medical record review determinations concurrent with the appeal of payment error calculations, rather than sequentially, as was the case for the 2007 cycle. While this change may help, the new process does not set time limits for when reconsideration decisions must be issued. Lack of explicit time frames for appeal decisions at reconsideration hinders CMS’s collection of improper payments because the agency cannot recover extrapolated overpayments until the MAO exhausts all levels of appeal, and the lack of time frames is inconsistent with established project management principles. We made two recommendations to address these issues: We recommended that CMS take steps to improve the timeliness of the RADV audit process. In July 2017, CMS officials told us that, as part of the agency’s efforts to consolidate program integrity initiatives into one center, the decision was made to transition RADV contract-level audits to the CMS Center for Program Integrity (CPI) at the end of 2016. With the transition, CMS is implementing a formal project management structure to facilitate the timeliness of the audit process. We also recommended that CMS require that reconsideration decisions be rendered within a specified number of days, similar to other time frames in the Medicare program. In July 2017, CMS officials told us that the agency is actively considering options for expediting the appeals process. Our 2016 report found that CMS had not expanded the RAC program to MA, as it was required to do by the end of 2010 by the Patient Protection and Affordable Care Act. Implementing an MA RAC would help CMS address the resource requirements of conducting contract-level audits.
In 2014, CMS issued a request for proposals for an MA RAC, which would audit improper payments in three areas of MA, but CMS officials told us that CMS did not receive any proposals to do the work in those audit areas, and that its goal was to reissue the MA RAC solicitation in 2015. CMS reconsidered the audit work in the request for the MA RAC. In December 2015, CMS issued a request for information seeking industry comment on how an MA RAC could be incorporated into CMS’s existing contract-level RADV audit framework. In the request, CMS stated that it was seeking an MA RAC to help the agency expand the number of MA contracts subject to audit each year, and stated that its ultimate goal is to have all MA contracts subject to either a contract-level RADV audit or another audit that would focus on specific diagnoses determined to have a high probability of being erroneous. Officials from three Medicare FFS RACs all told us their organizations had the capacity and willingness to conduct contract-level RADV audits. We recommended that CMS develop specific plans for incorporating a RAC into the RADV program. In July 2016, CMS described to us its initial steps to meet this goal. In July 2017, CMS officials told us that the agency is evaluating its strategy for the MA RAC with CMS leadership. In July 2014, we recommended that CMS complete all the steps necessary to validate encounter data, including performing statistical analyses, reviewing medical records, and providing MAOs with summary reports on CMS’s findings, before using the data to risk adjust payments or for other intended purposes. In our 2017 report, we found that CMS had made limited progress toward validating encounter data. (See fig. 2.) 
As of January 2017, CMS had begun compiling basic statistics on the volume and consistency of data submissions and preparing automated summary reports for MAOs indicating the diagnosis information used for risk adjustment; however CMS had not yet taken other important steps identified in its Medicaid protocol, which we used for comparison. The steps CMS had not yet taken as of our January 2017 report are: Establish benchmarks for completeness and accuracy. This step would establish requirements for collecting and submitting MA encounter data. Without benchmarks, CMS does not have objective standards against which to hold MAOs accountable for complete and accurate data reporting. Conduct analyses to compare with established benchmarks. This would help ensure accuracy and completeness. Without such analyses, CMS has limited ability to detect potentially inaccurate or unreliable data. Determine sampling methodology for medical record review and obtain medical records. Medical record review would help ensure the accuracy of encounter data. Without these reviews, CMS cannot substantiate the information in MAO encounter data submissions and lacks evidence for determining the accuracy of encounter data. Summarize analyses to highlight individual MAO issues. This step would provide recommendations to MAOs for improving the completeness and accuracy of encounter data. Without actionable and specific recommendations from CMS, MAOs might not know how to improve their submissions. In July 2014, we also recommended that CMS establish specific plans and time frames for using the data for all intended purposes in addition to risk adjusting payments to MAOs. We found in our 2017 report that CMS had made progress in defining its objectives for using MA encounter data for risk adjustment and in communicating its plans and time frames to MAOs. CMS reported it plans to fully transition to using MA encounter data for risk adjustment purposes by 2020. 
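A benchmark comparison of the kind described above might look like the following sketch: each MAO's encounter-data submission rate is compared against a completeness benchmark and shortfalls are flagged. The benchmark value and submission counts are invented for illustration; they are not CMS's actual standards, which the report notes had not yet been established.

```python
# Minimal sketch of the benchmark-comparison step CMS had not yet taken:
# compare each MAO's encounter submissions against a completeness benchmark
# and flag outliers for follow-up. All values below are invented.

COMPLETENESS_BENCHMARK = 0.95  # assumed: share of expected encounters submitted

submissions = {
    # MAO id: (encounters_submitted, encounters_expected)
    "MAO-A": (98_000, 100_000),
    "MAO-B": (71_000, 100_000),
    "MAO-C": (96_500, 100_000),
}

def flag_incomplete(subs, benchmark):
    """Return MAOs whose submission rate falls below the benchmark."""
    return sorted(
        mao for mao, (submitted, expected) in subs.items()
        if submitted / expected < benchmark
    )

print(flag_incomplete(submissions, COMPLETENESS_BENCHMARK))  # ['MAO-B']
```

Without an objective benchmark of this sort, as the report observes, CMS has no standard against which to hold MAOs accountable and limited ability to detect potentially incomplete or unreliable submissions.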
However, even though CMS had formed general ideas of how it would use MA encounter data for purposes other than risk adjustment, as of January 2017 it had not specified plans and time frames for most of the additional purposes for which the data may be used. These other purposes include activities to support program integrity. In July 2017, CMS officials told us that the agency had not taken any further actions in response to our July 2014 recommendations. Because CMS is making payments that are based on data that have not been fully validated for completeness and accuracy, the soundness of billions of dollars in Medicare expenditures remains unsubstantiated. In addition, without planning for all of the authorized uses, the agency cannot be assured that the amount and types of data being collected are necessary and sufficient for specific purposes. Given CMS’s limited progress in planning and time frames for all authorized uses of the data, we continue to believe CMS should implement our July 2014 recommendations that CMS should establish specific plans for using MA encounter data and thoroughly assess data completeness and accuracy before using the data to risk adjust payments or for other purposes. In response to our 2014 recommendations, the Department of Health and Human Services did not specify a date by which CMS would develop plans for all authorized uses of the data and did not commit to completing data validation before using the data for risk adjustment in 2015. CMS began using encounter data for risk adjustment in 2015, although it had not completed activities to validate the data. In conclusion, Medicare remains inherently complex and susceptible to improper payments. Therefore, actions CMS takes to ensure the integrity of the MA program by identifying, reducing, and recovering improper payments would be critical to safeguarding federal funds. Chairman Buchanan, Ranking Member Lewis, and Members of the Subcommittee, this concludes my prepared statement. 
I would be pleased to respond to any questions that you may have. For questions about this statement, please contact James Cosgrove at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Martin T. Gahart (Assistant Director), Aubrey Naffis (Analyst-in-Charge), Manuel Buentello, Elizabeth T. Morrison, Jennifer Rudisill, and Jennifer Whitworth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

GAO has designated Medicare as a high-risk program because of its size, complexity, and susceptibility to mismanagement and improper payments, which reached an estimated $60 billion in fiscal year 2016. CMS contracts with MAOs to provide services to about one-third of all Medicare beneficiaries, and paid MAOs about $200 billion for their care in 2016. CMS's payments to the MAOs vary based on the health status of beneficiaries. For example, an MAO receives a higher risk-adjusted payment for an enrollee with a diagnosis of diabetes than for an otherwise identical enrollee without this diagnosis. Improper payments in MA arise primarily from diagnosis information unsupported by medical records that leads CMS to increase its payments. This testimony is based on GAO's 2016 and 2017 reports addressing MA improper payments and highlights (1) factors that have hindered CMS's efforts to identify and recover improper payments through payment audits and (2) CMS's progress in validating encounter data for use in risk adjusting payments to MAOs.
For these reports, GAO reviewed research and agency documents, analyzed data from ongoing RADV audits, and compared CMS's activities with the agency's protocol for validating Medicaid encounter data and federal internal control standards. GAO interviewed CMS officials for both reports, and also asked for updates on the status of GAO's prior recommendations for this statement. The Centers for Medicare & Medicaid Services (CMS) estimated that about $16 billion—nearly 10 percent—of Medicare Advantage (MA) payments in fiscal year 2016 were improper. To identify and recover MA improper payments, CMS conducts risk adjustment data validation (RADV) audits of prior payments. These audits determine whether the diagnosis data submitted by Medicare Advantage organizations (MAOs), which offer private plan alternatives to fee-for-service (FFS) Medicare, are supported by a beneficiary's medical record. CMS pays MAOs a predetermined monthly amount for each enrollee. CMS uses a process called risk adjustment to project each enrollee's health care costs using diagnosis data from MAOs and demographic data from Medicare. In its 2016 report, GAO found several factors impeded CMS's efforts to identify and recover improper payments, including: RADV audits were not targeted to contracts with the highest potential for improper payments. The agency's method of calculating improper payment risk for each contract, based on the diagnoses reported for the contract's beneficiaries, had shortcomings, and CMS did not use other available data to select the contracts with the greatest potential for improper payment recovery. Substantial delays in RADV audits in progress jeopardize CMS's goal of eventually conducting annual RADV audits. CMS had RADV audits underway for payment years 2011, 2012, and 2013. CMS had not expanded the use of Recovery Audit Contractors (RAC) to the MA program as required by law in 2010. 
RACs have been used in other Medicare programs to recover improper payments for a contingency fee. GAO recommended that CMS improve the accuracy of its methodology for identifying contracts with the greatest potential for improper payment recovery, modify the processes for selecting contracts to focus on those most likely to have improper payments, and improve the timeliness of the RADV audit process. CMS reported in July 2017 that it had taken initial actions to address these recommendations, but none had been fully implemented. GAO also recommended that CMS develop specific plans for incorporating a RAC into the RADV program. In July 2017, CMS reported that the agency is evaluating its strategy for the MA RAC with CMS leadership. CMS has begun to use encounter data, which are similar to FFS claims data, along with diagnosis data from MAOs to help ensure the proper use of federal funds by improving risk adjustment in the MA program. Encounter data include more information about the care and health status of MA beneficiaries than the data CMS uses now to risk adjust payments. In its January 2017 report, GAO found CMS had made progress in developing plans to use encounter data for risk adjustment. However, CMS had made limited progress in validating the completeness and accuracy of MA encounter data, as GAO recommended in 2014. GAO continues to believe that CMS should establish plans for using encounter data and thoroughly assess the data for completeness and accuracy before using it to risk adjust payments.
Women represent a small but rapidly growing segment of the nation’s veteran population. In 1982, there were about 740,000 women veterans. By 1997, that number had increased by 66 percent to over 1.2 million, or 4.8 percent of the veteran population. Today, women make up nearly 14 percent of the active duty force and, with the exception of the Marine Corps, 20 percent of new recruits. By 2010, women are expected to represent over 10 percent of the total veteran population. Like male veterans, female veterans who served on active duty in the uniformed services for the minimum amount of time specified by law and who were discharged, released, or retired under conditions other than dishonorable are eligible for some VA health care services. Historically, veterans’ eligibility for health care services depended on factors such as the presence and extent of service-connected disabilities, income, and period and conditions of military service. In 1996, the Congress passed the Veterans Health Care Eligibility Reform Act (P.L. 104-262), which simplified the eligibility criteria and made all veterans eligible for comprehensive outpatient care. To manage its health care services, the act requires VA to establish an enrollment process for managing demand within available resources.
The seven priorities for enrollment are (1) veterans with service-connected disabilities rated at 50 percent or higher; (2) veterans with service-connected disabilities rated at 30 or 40 percent; (3) former prisoners of war, veterans with service-connected disabilities rated at 10 or 20 percent, and veterans whose discharge from active military service was for a compensable disability that was incurred or aggravated in the line of duty or veterans who, with certain exceptions and limitations, are receiving disability compensation; (4) catastrophically disabled veterans and veterans receiving increased non-service-connected disability pensions because they are permanently housebound; (5) veterans unable to defray the cost of medical care; (6) all other veterans in the so-called “core” group, including veterans of World War I and veterans with a priority for care based on presumed environmental exposure; and (7) all other veterans. VA may create additional subdivisions within each of these enrollment groups. With the growing women veteran population came the need to provide health care services equivalent to those provided to men. Over the past 15 years, GAO, VA, and the Advisory Committee on Women Veterans have assessed VA services available to women veterans. In 1982, GAO reported that VA lacked adequate general and gender-specific health care services, effective outreach for women veterans, and facilities that provided women veterans appropriate levels of privacy in health care delivery settings. In 1992, GAO reported that VA had made progress in correcting previously identified deficiencies, but some privacy deficiencies and concerns about availability and outreach remained. In response to concerns about the availability of women veterans’ health care and to improve VA’s delivery of health care to women veterans, the Congress enacted the Women Veterans Health Programs Act of 1992 (P.L. 102-585). This act authorized new and expanded health care services for women.
In 1993, VA’s Office of the Inspector General (OIG) for Health Care Inspections reported that problems—such as women veterans’ not always being informed about eligibility for health care services as well as VA’s lack of appropriate accommodations, medical equipment, and supplies to treat women patients in VA medical facilities—still existed. In December 1993, the Secretary of the Department of Veterans Affairs established VA’s first Women Veterans’ Program Office (WVPO). In November 1994, the Congress enacted legislation (P.L. 103-446) that required VA to create a Center for Women Veterans to oversee VA programs for women. As a result, WVPO was reorganized into the Center for Women Veterans. The Center Director reports directly to the VA Secretary. In compliance with the Government Performance and Results Act, VA has a strategic plan that includes goals for (1) monitoring the trends in women’s utilization of VA services from fiscal years 1998 through 2001, (2) reporting on barriers and actions to address recommendations to correct them, and (3) assessing progress in correcting deficiencies from fiscal years 1999 through 2001. VA’s performance plan also includes goals that target women veterans currently enrolled in VA for aggressive prevention and health promotion activities to screen for breast and cervical cancer. VA has taken several actions to remove barriers identified by GAO, VA, and women veteran proponents over the years that prevent women veterans from obtaining care in VA medical facilities. First, VA has increased outreach efforts to inform women veterans of their eligibility for benefits and health care services. However, it has not evaluated these efforts, so it is not known how knowledgeable women veterans are about their eligibility for health care services. VA has also designated coordinators to assist women veterans in accessing the system.
In addition, VA has identified and begun to correct patient privacy deficiencies in inpatient and outpatient settings. VA has surveyed its facilities on two occasions to determine the extent to which privacy deficiencies exist. In fiscal year 1998, VA spent more than $67 million correcting deficiencies and has developed plans for correcting remaining deficiencies. However, VA continues to face obstacles addressing the inpatient mental health needs of women veterans in a predominantly male environment and has established a task force to look at this and other issues. Over the last few years, VA has increased its outreach efforts to inform women veterans of their eligibility for care in response to problems highlighted by GAO, VA, and veteran service organizations between 1982 and 1994. We and others reported that (1) women veterans were not aware that they were eligible to receive health care in VA and (2) VA did not target outreach to women veterans, routinely disseminate information to service organizations with predominantly female memberships, or adequately inform women of changes in their eligibility. To address these concerns, VA has targeted women veterans during outreach efforts at the headquarters, regional, and local levels. At the headquarters level, a number of outreach strategies have been implemented. For example, the Center for Women Veterans, as part of its strategic and performance goals for 1998 through 2000, is placing greater emphasis on the importance of outreach to women and the need for improved communication techniques. Since the inception of WVPO and the Center for Women Veterans, VA has held an average of 15 to 20 town meetings a year, along with other informational seminars. The Center also provided informational seminars at the annual conventions of the Women’s Army Corps and the Women Marines; American Legion; American Veterans of World War II, Korea, and Vietnam; and Disabled American Veterans.
The Center also provided information on VA programs for women veterans and other women veterans’ issues at national training events for county and state veteran service officers and their counterparts in the national Veterans’ Service Organizations. Further, the Center established a web site within the VA home page to provide women veterans with information about health care services and other concerns as well as the opportunity to correspond with the Center via electronic mail. At the regional and local levels, VBA regional and benefit offices, VA medical centers, and Vet Centers display posters, brochures, and other materials that focus specifically on women veterans. They also send representatives to distribute these materials and talk to women veterans during outreach activities, such as health fairs and media events, that are used to publicize the theme that “Women Are Veterans, Too.” The VA facilities we visited were conducting similar activities. For example, the medical center in New Orleans directed its Office of Public Relations to work closely with the women veterans coordinator to develop an outreach program. The New Orleans Vet Center women veterans coordinator told us that she expanded her outreach efforts to colleges with nursing schools in an effort to reach women veterans who do not participate in veteran-related activities. In addition, VBA regional offices coordinate with the Department of Defense to provide information on VA benefits and services to prospective veterans during Transition Assistance Program (TAP) briefings. In addition to providing information to active-duty personnel who plan to separate from the military on how to transition into civilian life, TAP briefings provide information on the benefits they may be eligible for as veterans as well as how to obtain them. Although VA has greatly increased its outreach efforts, it has not yet evaluated the effectiveness of these efforts. 
Women veterans organizations have acknowledged the increase in VA’s outreach efforts directed at women veterans but continue to express concern about whether women veterans are being reached and adequately informed about their eligibility for benefits and health care services. Several women veterans we talked with during our site visits said they found out by chance—during casual conversations—that they were eligible for care. Women veterans and agency staff acknowledged that “word of mouth” from satisfied patients appears to be one of the most effective ways to share information about various benefits and services to which women veterans may be entitled. In March 1998, the Advisory Committee for Women Veterans, the Center for Women Veterans, and the National Center for Veterans Statistics provided specific questions for inclusion in VA’s Survey of Veterans for Year 2000 to address the extent to which women veterans are becoming more knowledgeable about their eligibility for services. This survey should allow VA to assess the effectiveness of its outreach to women veterans. Women veterans coordinators assist in obtaining care, advocate for women veterans’ health care, and collaborate with medical center management to make facilities more sensitive to women veterans. This role was established in 1985 because women veterans did not know how to obtain health care services once they became aware of their eligibility for these services. However, in 1994, VA’s OIG reported that these coordinators often lacked sufficient training and time to perform effectively; many women veterans coordinators performed in this capacity on a part-time basis. VA has since provided women veterans coordinators training and more time to carry out their roles and help them provide better assistance to women veterans in accessing VA’s health care system and obtaining care. 
In an effort to make them more effective in this role, in 1994, VA implemented a national training program designed to increase women veterans coordinators’ awareness of their roles and familiarize them with women veterans’ issues. The program is administered by a full-time women veterans’ national education coordinator and staff at the Birmingham Regional Medical Education Center. In addition, the women veterans coordinators at VA’s medical centers in Tampa and Bay Pines developed a mini-residency training program for women veterans coordinators. This program, approved in 1995, is the only training program of its kind and is offered for newly appointed women veterans coordinators. To allow women veterans coordinators more time to perform their duties, in 1994, VA established positions for additional full-time women veteran coordinators at selected VA medical centers and four full-time VBA regional women veterans coordinators. As of January 1998, about 40 percent of the women veterans coordinators in VA medical facilities were full-time. According to VA’s Advisory Committee on Women Veterans, the women veterans coordinator program has proven to be one of the most successful initiatives recommended by the committee. Patient privacy for women veterans has been a long-standing concern, and VA acknowledges that the correction of physical barriers that limit women’s access to care in VA facilities will be an ongoing process. Between 1982 and 1994, GAO and VA’s OIG reported that physical barriers, including hospital wards with large open rooms having 8 to 16 beds and a lack of separate bath facilities, concerned women veterans and inconvenienced staff. Female patients had to compete with patients in isolation units for the limited number of private rooms in VA hospitals. Also, hospitals with communal bathrooms sometimes required staff to stand guard or use signs indicating that the bathroom was occupied by female patients. 
As required by section 322 of the Veterans’ Health Care Eligibility Reform Act of 1996, VA conducted nationwide privacy surveys of its facilities in fiscal years 1997 and 1998 to determine the types and magnitude of privacy deficiencies that may interfere with appropriate treatment in clinical areas. The surveys revealed numerous patient privacy deficiencies in both inpatient and outpatient settings. The fiscal year 1998 survey also showed that 117 facilities from all 22 Veterans Integrated Service Networks (VISN) spent nearly $68 million in construction funds in fiscal year 1998 to correct privacy deficiencies. Another 91 facilities from 20 of the 22 VISNs used a total of 130 alternatives to construction to eliminate deficiencies. These alternatives included actions such as initiating policy changes that would admit female patients only to those areas of the hospital that have the appropriate facilities or issuing policy statements that gynecological examinations would only be performed in the women’s clinics or contracted out. In addition, VISN and medical center staff developed plans for correcting and monitoring the remaining deficiencies. Although the 1998 survey showed that VA has improved the health care environment to afford women patients comfort and a feeling of security, the survey also revealed that many deficiencies still exist. (See table 1.) Of those facilities with deficiencies, the most prevalent inpatient deficiency was a lack of sufficient toilet and shower privacy, and the most prevalent outpatient deficiency was the lack of curtain tracks in various rooms. Consistent with VA’s strategic plan for fiscal years 1998 through 2003, a task force with representatives from VHA and the Center for Women Veterans was established to identify, prioritize, and develop plans for addressing five major issues related to women veterans’ health care, one of which was patient privacy. 
Further, VA plans to assess the progress made in correcting patient privacy deficiencies on an annual basis between fiscal years 1999 and 2001. VA requires that each facility have a plan for corrective action and a timetable for completion; VA has also directed each VISN to integrate the planned corrections into their construction programs. To correct the remaining deficiencies, VA projects it will spend $49.3 million in fiscal year 1999 and $41 million in fiscal year 2000. Over this same period, medical centers are estimated to spend approximately $647,000 more in discretionary funds to make some of these corrections. Beyond fiscal year 2000, VA projects it will spend an additional $77 million in capital funds; six facilities in VISNs 6 and 7 account for 58 percent of the total projected spending for beyond fiscal year 2000. While correcting privacy deficiencies has allowed VA to better accommodate women veterans’ health care needs, VA faces other problems accommodating women veterans who need inpatient mental health treatment. In the summer of 1998, VA established a task force of clinicians and women veterans coordinators to assess mental health services for women veterans and make recommendations by June 1999 for improving VA’s capacity to provide inpatient psychiatric care to this population. This task force is chaired by the Director of the Center for Women Veterans. VA data show that in fiscal year 1997, mental disorder was the most prevalent diagnosis—26.4 percent—for women veterans hospitalized. While inpatient psychiatric accommodations are available in VA facilities, in most instances the environment is not conducive to treating women veterans. In 1997, VA’s Center for Women Veterans reported that women veterans hospitalized on VA mental health wards for post-traumatic stress disorder, substance abuse, or other psychiatric diagnoses are often the only female on a ward with 30 to 40 males. 
This disparate ratio of women to men discourages women from discussing gender-specific issues and also makes it difficult to provide group therapy addressing women’s treatment issues. Women veterans also noted that they were concerned about their safety in this environment. These concerns included male patients engaging in inappropriate remarks or behavior and inadequate levels of privacy. During our site visits, two women veterans expressed similar concerns. VA has inpatient psychiatric facilities with separate psychiatric units for women veterans in five areas: Battle Creek, Michigan; Brockton-West Roxbury, Massachusetts; Central Texas Health Care System; Brecksville-Cleveland, Ohio; and Palo Alto, California, Health Care System. Women veterans often do not want to or are unable to leave families and support systems to travel to one of these facilities for treatment. Staff at one of the medical centers we visited in Florida told us that a few of their women patients who had been sexually traumatized would be better served in an inpatient setting, but the nearest suitable inpatient facilities were those in California and Ohio, and the patients did not want to go that far from home. VA’s greater emphasis on women veterans’ health has resulted in an increase in both the availability and use of general and gender-specific services, such as pap smears, mammograms, and reproductive health care. Some VA facilities offer a full complement of health care services, including gender-specific care, on a full-time basis in separate clinics designated for women. Others may only offer certain services on a contractual or part-time basis. According to program officials and the women veterans coordinators at the locations we visited, the variation in the availability and delivery of services is generally influenced by the medical center directors’ views of the health needs of the potential patient population, available resources, and demand for services.
The increase in the availability of services and the emphasis on women veterans’ health have contributed to increases in the number of women veterans served and visits made, with the exception of inpatient care. Between fiscal years 1994 and 1997, the number of gender-specific services provided to women veterans increased about 42 percent, from over 85,000 to over 121,000. The total number of inpatient and outpatient visits made during this same period increased nearly 56 percent, from about 893,000 to almost 1.4 million. Over the past 10 years, GAO, VA’s OIG, and VA’s Advisory Committee on Women Veterans reported that VA was not providing adequate care to women veterans and was not equipped to do so. These organizations found that VA (1) was not providing complete physical examinations, including gynecological exams for women; (2) lacked the equipment and supplies to provide gender-specific care to women, such as examination tables with stirrups and speculums; and (3) lacked guidelines for providing care to women. As a result, VA began to place more emphasis on women veterans’ health and looked for ways to respond to these criticisms. For example, to ensure equity of access and treatment, VA designated women veterans’ health as a special emphasis program that merited focused attention. In 1983, VA began requiring medical centers to develop written plans that show how they will meet the health care needs of women veterans. At a minimum, these plans must define (1) that a complete physical examination for women is to include a breast and gynecological exam, (2) provisions for inpatient and outpatient gynecology services, and (3) referral procedures for necessary services unavailable at VA facilities. VA also procured the necessary equipment and supplies to treat women. In addition, VA established separate clinics for women veterans in some of its medical facilities.
The locations with separate women’s clinics that we visited had written plans that contained the required information and the necessary equipment and supplies to provide gender-specific treatment to women. Also, we found evidence that women veterans coordinators were monitoring services provided to ensure proper care and follow-up. VA is more able to accommodate women patients than it was prior to the early 1990s. In 1997, VA provided in-house 94 percent of the routine gynecological care sought by women veterans, even though its number of women’s clinics fell from 126 in 1994 to 96 in 1998. Some VA facilities closed their women’s clinics because of consolidation or implementation of primary care. Others are phasing their women’s programs into primary care, especially the facilities that had limited services available in the women’s clinic. This is consistent with VA’s efforts to enhance the efficiency of its health care system. For example, since September 1995, VA has merged or is in the process of merging the management and operations of 48 hospitals and clinic systems into 23 locally integrated systems. While women veterans can obtain gender-specific services as well as other health care services at most VA medical facilities, the extent to which care, especially gender-specific care, is available varies by facility. Some facilities offer a full array of routine and acute gender-specific services for women—such as pap smears, pelvic examinations, mammograms, breast health, gynecological oncology, and hormone therapy—while others offer only routine or preventive gender-specific care. Of the five sites we visited, two—Tampa and Boston—are Women Veterans’ Comprehensive Health Centers, which enable women veterans to obtain almost all of their health care within the center. Generally, these centers have full-time providers who may also be supported by other clinicians who provide specialty care on a part-time basis.
For example, the Tampa Women Veterans’ Comprehensive Health Center, which provided care to about 3,000 women in 1997, is run by a full-time internist, who is supported by another internist, four nurse practitioner primary care providers, a gynecologist, a psychologist, a psychiatrist, and other health care and administrative support staff. The Tampa center as well as the Boston center provide their services 5 days a week. Other facilities offer less extensive services than those offered within the comprehensive centers. For example, the VA medical center in Washington, D.C., offers only routine or preventive gender-specific care by a nurse practitioner about 4.5 days a week; acute or more specialized gynecological care is only offered one-half day a week with the assistance of a gynecologist and general surgeon through a sharing agreement with a local Department of Defense facility. Other health care services are available within the medical center. The range of services provided by VA’s nonhospital-based clinics varies as well. Some nonhospital-based clinics, like the one in Orlando, may provide services almost comparable to those provided by the medical center or comprehensive center. Other centers, however, offer services on a more limited basis. For example, the nonhospital-based clinic associated with one of the medical centers we visited only offers gynecological services once a week. According to the women veterans coordinator, the average waiting time to get a gynecology appointment at this clinic is 51 days. She explained that if the situation is urgent, arrangements are made to have the patient seen in the urgent care clinic or at the medical center. Variation in services at VA medical facilities may be attributable to one or more factors, such as medical center management’s views on the level of services needed, funding, staffing, and demand for services. 
The specific services offered and the manner in which they are delivered within VA facilities are left to the discretion of medical center or VISN management. Most VA facilities did not receive additional funding to establish health care programs for women and had to provide these additional services while maintaining or minimally affecting existing programs. Initially, VHA provided additional funding for the comprehensive centers, which was supplemented by funds from the medical center’s budget. VHA also provided some additional funding in 1994 to help VA facilities obtain resources to counsel women veterans who had been sexually traumatized. The women veterans coordinators at the five medical center locations we visited told us that the medical center directors have a strong commitment to providing quality health care to women veterans and that without such support, it would be difficult to meet women veterans’ needs or improve the women’s health program. Some women’s programs had to be established and operated using the medical center’s existing funding and resources, which included no provisions for these services. Although the Tampa and Boston centers received VHA funding to establish a comprehensive health center, they still had to obtain additional funding from the medical center, which required management’s support. The availability of gender-specific services may also be influenced by the demand for these services. At two locations we visited, the women veterans coordinators told us that when they first opened their women’s clinics, they operated on a very limited scale—one-half to 1 day a week. However, the demand was so overwhelming that they increased their operations to 5 days a week. On the other hand, the women veterans population in some areas is small and may not generate a high enough demand for gender-specific services to provide them in a separate women veterans’ health care program or within the medical center on a full-time basis. 
In such instances or if a very small number of female veterans have historically availed themselves of the services, it may not be cost-effective to provide these services in-house, as pointed out by VA’s OIG in 1993. Instead, it may be appropriate to contract out for these services. In the 1990s, women veterans’ utilization of gender-specific services has increased significantly. Outpatient and inpatient visits among women veterans at VA facilities increased more than 50 percent between fiscal years 1994 and 1997. Based on VA’s survey of its medical facilities, the number of women veterans receiving gender-specific services increased about 42 percent, from more than 85,000 to about 121,200, during the same period. (See table 2.) Between fiscal years 1994 and 1997, the number of pap smears and mammograms provided to women veterans increased dramatically. In fiscal year 1997, almost 53,000 women veterans received pap smears, a 63-percent increase over fiscal year 1994. Similarly, in fiscal year 1997, about 36,400 women veterans received mammograms, a 47-percent increase over fiscal year 1994. Reproductive health care services, which cover the entire range of gynecological services, were provided to over 31,800 women veterans in fiscal year 1997, 12 percent more than in fiscal year 1994. According to VA, the pap smear and mammography examination rates among appropriate and consenting women veterans in 1997 are 90 percent and 87 percent, respectively. VA has set goals to increase the mammography and pap smear examination rates from their current base rates to 92 percent and 90 percent, respectively, by fiscal year 2003. Women veterans have also used more health care services in general, consistent with VA’s goal to meet women veterans’ total health care needs. With the exception of inpatient care, the number of women veterans who use VA health care services and the frequency of their usage continue to increase.
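The percent-change figures above can be checked with the standard formula; because the report’s counts are rounded, recomputed values land within about a point of the stated percentages. A minimal sketch (Python, using only figures given in the report):

```python
def pct_change(old, new):
    """Percent change from an earlier count to a later one."""
    return (new - old) / old * 100

def implied_base(new, pct):
    """Back out the earlier count implied by a later count and its stated percent increase."""
    return new / (1 + pct / 100)

# Gender-specific services, FY1994 -> FY1997 (report: "about 42 percent")
print(round(pct_change(85_000, 121_200), 1))  # → 42.6

# FY1994 pap-smear count implied by the 53,000 FY1997 figure and the stated
# 63-percent increase (the report does not give the FY1994 count directly)
print(round(implied_base(53_000, 63)))  # → 32515
```

The second function simply inverts the first, which is why the implied fiscal year 1994 base of roughly 32,500 pap smears is only as precise as the rounded inputs.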
For the 5-year period between fiscal years 1992 and 1997, the women veteran population increased only slightly, from about 1.2 million to 1.23 million. However, between fiscal years 1994 and 1997, the number of women veterans who received outpatient care increased 32 percent, from about 90,000 to more than 119,000, and the total number of outpatient visits increased 57 percent, from nearly 870,000 to over 1.3 million. (See table 3.) During this same period, the number of women veterans who received inpatient care decreased about 5 percent, from about 14,350 to 13,700, which is consistent with VA’s—and the nation’s—current health care trend to deliver services in the least costly, most appropriate setting. VA’s health care program for women veterans has made important strides in the last few years. VA has made good progress informing women veterans about their eligibility for services and the services available, assisting women veterans in accessing the system, correcting patient privacy deficiencies, and increasing health care services for women veterans. Most importantly, VA’s efforts are reflected in the increased availability of services and utilization by women veterans. While progress has been made, the importance of sustaining efforts to address the special needs of women veterans will only increase, as their percentage of the total veteran population is projected to double by 2010. Coincident with these demographic changes, VA is making changes to the way it delivers health care, including integrating and consolidating facilities while maintaining quality of care and implementing eligibility reform. VA will need to be especially vigilant to ensure that women veterans’ needs are appropriately addressed as it implements these overall changes. 
In its comments on a draft of this report, VA agreed with our findings that progress has been made in serving women veterans through the Women Veterans’ Health Program but that additional work is required to improve outreach to women, rectify privacy issues, and improve inpatient environments for women undergoing inpatient psychiatric treatment. VA also provided some technical comments, which we have incorporated as appropriate. VA’s comments are included as appendix II. Copies of this report are being sent to the Secretary of Veterans Affairs, other appropriate congressional committees, and interested parties. We will also make copies available to others on request. If you have any questions about the report, please call me or Shelia Drake, Assistant Director, at (202) 512-7101. Jacquelyn Clinton, Evaluator-in-Charge, was a major contributor to this report. To determine the barriers to women veterans obtaining care within VA, we talked with officials in the Center for Women Veterans, within the Office of the Secretary; VHA; two VBA regional offices; and Readjustment Counseling Centers (Vet Centers) in Tampa, Florida; St. Petersburg, Florida; and New Orleans, Louisiana. We also reviewed Women Veterans Advisory Committee reports and talked with women veterans and VA program officials in five medical centers: Bay Pines, Florida; Boston, Massachusetts; Tampa; New Orleans; and Washington, D.C. These medical centers were selected because they offered different levels of health care services to women veterans. To determine the availability and use of gender-specific care, we discussed women veterans’ health care services with officials at VA’s Central Office and the five medical centers we visited. We reviewed VA medical centers’ women veterans health care plans, relevant VA policy directives, and women veterans health care utilization data. 
We also reviewed quality assurance plans, annual reports, minutes of Women Veterans Advisory Committee meetings, outreach materials, and other written documentation and materials. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO reviewed the status of the Department of Veterans Affairs' (VA) health care program for women, focusing on: (1) the progress VA made in removing barriers that may prevent women veterans from obtaining VA health care services; and (2) the extent to which VA health care services, particularly gender-specific services, are available to and used by women veterans. 
GAO noted that: (1) VA has made considerable progress in removing barriers that prevent women veterans from obtaining care; (2) VA has increased outreach to women veterans to inform them of their eligibility for health care services and designated women veterans coordinators to assist women veterans in accessing VA's health care system; (3) VA has also improved the health care environment in many of its medical facilities, especially with respect to accommodating the privacy needs of women veterans; (4) however, VA recognizes that it has more work to do in these areas and plans to address concerns about the effectiveness of its outreach efforts and privacy barriers that still exist in some facilities; (5) in response to women veterans' concerns, VA has begun to assess its capacity to provide inpatient psychiatric care to women veterans; (6) with regard to gender-specific services, VA's efforts to emphasize women veterans' health care have contributed to a significant increase in the availability and use of all services over the last 3 years; (7) the range of services differs by facility; services may be provided in clinics designated specifically for women veterans, or they may be provided in the overall medical facility health care system; (8) more importantly, utilization has increased significantly between 1994 and 1997; (9) for example, gender-specific services grew from over 85,000 to more than 121,000; and (10) during the same time period, the number of women veterans treated for all health care services on an outpatient basis increased by about 32 percent, to about 119,300.
You are an expert at summarizing long articles. Proceed to summarize the following text:
Our work on Customs’ efforts to interdict drugs has focused on four distinct areas: (1) internal controls over Customs’ low-risk cargo entry programs; (2) the missions, resources, and performance measures for Customs’ aviation program; (3) the development of a specific technology for detecting drugs; and (4) Customs drug intelligence capabilities. In July 1998, at the request of Senator Dianne Feinstein, we reported on Customs’ drug-enforcement operations along the Southwest border of the United States. Our review focused on low-risk cargo entry programs in use at three ports—Otay Mesa, California; Laredo, Texas; and Nogales, Arizona. To balance the facilitation of trade through ports with the interdiction of illegal drugs being smuggled into the United States, Customs initiated and encouraged its ports to use several programs to identify and separate low-risk shipments from those with apparently higher smuggling risk. One such program is the Line Release Program, designed to expedite cargo shipments that Customs determined to be repetitive, high volume, and low risk for narcotics smuggling. The Line Release Program was first implemented on the Northern border in 1986 and was expanded to most posts along the Southwest border by 1989. This program requires importers, brokers (companies who process the paperwork required to import merchandise), and manufacturers to apply for the program and to be screened by Customs to ensure that they have no past history of narcotics smuggling and that their prior shipments have been in compliance with trade laws and Customs’ commercial importing regulations. In 1996, Customs implemented the Land Border Carrier Initiative Program, which required that the Line Release shipments across the Southwest border be transported by Customs-approved carriers and driven by Customs-approved drivers. After the Carrier Initiative Program was implemented, the number of Southwest Border Line Release shipments dropped significantly.
At each of the three ports we visited, we identified internal control weaknesses in one or more of the processes used to screen Line Release applicants for entry into the program. These weaknesses included (1) an absence of specific criteria for determining applicant eligibility at two of the three ports, (2) incomplete documentation of the screening and review of applicants at two of the three ports, and (3) lack of documentation of supervisory review for aspects of the applicant approval process. During our review, Customs representatives from northern and southern land-border cargo ports approved draft Line Release volume and compliance eligibility criteria for program applicants and draft recertification standards for program participants. The Three Tier Targeting Program—a method of targeting high-risk shipments for narcotics inspection—was used at the three Southwest border ports that we visited. According to officials at the three ports, they lost confidence in the program’s ability to distinguish high-risk from low-risk shipments because of two operational problems. First, there was little information available in any database for researching foreign manufacturers. Second, local officials doubted the reliability of the designations. They cited examples of narcotics seizures from shipments designated as “low-risk” and the lack of a significant number of seizures from shipments designated as “high-risk.” Customs suspended this program until more reliable information is developed for classifying low-risk importations. One low-risk entry program—the Automated Targeting System—was being pilot tested at Laredo. It was designed to enable port officials to identify and direct inspectional attention to high-risk shipments. That is, the Automated Targeting System was designed to assess shipment entry information for known smuggling indicators and thus enable inspectors to target high-risk shipments more efficiently.
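The general idea behind indicator-based targeting of the kind the Automated Targeting System represents can be sketched as a weighted score over smuggling indicators. The indicator names, weights, and threshold below are invented purely for illustration; they are not Customs’ actual criteria.

```python
# Illustrative only: hypothetical indicators and weights, not Customs' criteria.
ILLUSTRATIVE_WEIGHTS = {
    "first_time_importer": 3,
    "unusual_routing": 2,
    "manifest_mismatch": 4,
}

HIGH_RISK_THRESHOLD = 5  # invented cutoff for directing inspectional attention

def risk_score(triggered_indicators):
    """Sum the weights of the indicators a shipment's entry information triggers."""
    return sum(ILLUSTRATIVE_WEIGHTS[i] for i in triggered_indicators)

score = risk_score(["first_time_importer", "manifest_mismatch"])
print(score, "inspect" if score >= HIGH_RISK_THRESHOLD else "expedite")  # → 7 inspect
```

A scheme like this only distinguishes high- from low-risk shipments as well as its underlying data—which is why, as noted above, sparse information on foreign manufacturers undermined confidence in the Three Tier Targeting Program’s designations.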
Customs is evaluating the Automated Targeting System for expansion to other land-border cargo ports. In September 1998, we reported on Customs’ aviation program missions, resources, and performance measures. Since the establishment of the Customs Aviation Program in 1969, its basic mandate to use air assets to counter the drug smuggling threat has not changed. Originally, the program had two principal missions: (1) border interdiction of drugs being smuggled by plane into the United States, and (2) law enforcement support to other Customs offices as well as other federal, state, and local law enforcement agencies. In 1993, the administration instituted a new policy to control drugs coming from South and Central America. Because Customs aircraft were to be used to help carry out this policy, foreign counterdrug operations became a third principal mission for the aviation program. Since then, the program has devoted about 25 percent of its resources to the border interdiction mission, 25 percent to foreign counterdrug operations, and 50 percent to other law enforcement support. Customs Aviation Program funding decreased from about $195 million in fiscal year 1992 to about $135 million in fiscal year 1997—that is, about 31 percent in constant or inflation-adjusted dollars. While available funds decreased, operations and maintenance costs per aircraft flight hour increased. Customs Aviation Program officials said that this increase in costs was one of the reasons they were flying fewer hours each year. From fiscal year 1993 to fiscal year 1997, the total number of flight hours for all missions decreased by over one-third, from about 45,000 hours to about 29,000 hours. The size of Customs’ fleet dropped in fiscal year 1994, when Customs took 19 surveillance aircraft out of service because of funding reductions. The fleet has remained at about 114 aircraft since then.
The number of Customs Aviation Program onboard personnel decreased, from a high of 956 in fiscal year 1992 to 745 by the end of fiscal year 1997. Customs has been using traditional law enforcement measures to evaluate the aviation program (e.g., number of seizures, weight of drugs seized, number of arrests). These measures, however, are used to track activity, not measure results or effectiveness. Until 1997, Customs also used an air threat index as an indicator of its effectiveness in detecting illegal air traffic. However, Customs has discontinued use of this indicator, as well as some other performance measures, because Customs determined that they were not good measures of results and effectiveness. Having recognized that these measures were not providing adequate insights into whether the program was producing desired results, Customs said it is developing new performance measures in order to better measure results. However, its budget submission for fiscal year 2000 contained no new performance measures. The pulsed fast neutron analysis (PFNA) inspection system is designed to directly and automatically detect and measure the presence of specific materials (e.g., cocaine) by exposing their constituent chemical elements to short bursts of subatomic particles called neutrons. Customs and other federal agencies are considering whether to continue to invest in the development and fielding of this technology. The Chairman and the Ranking Minority Member of the Subcommittee on Treasury and General Government, Senate Committee on Appropriations, asked us to provide information about (1) the status of plans for field testing a PFNA system and (2) federal agency and vendor views on the operational viability of such a system. We issued the report responding to this request on April 13, 1999. 
Customs, the Department of Defense (DOD), the Federal Aviation Administration (FAA), and Ancore Corporation—the inspection system inventor—recently began planning to field test PFNA. Because they were in the early stage of planning, they did not expect the actual field test to begin until mid to late 1999 at the earliest. Generally speaking, agency and vendor officials estimated that a field test covering Customs’ and DOD’s requirements will cost at least $5 million and that the cost could reach $8 million if FAA’s requirements are included in the joint test. Customs officials told us that they are working closely with the applicable congressional committees and subcommittees to decide whether Customs can help fund the field test, particularly given the no-federal-cost language of Senate Report 105-251. In general, a complete field test would include (1) preparing a test site and constructing an appropriate facility; (2) making any needed modifications to the only existing PFNA system and its components; (3) disassembling, shipping, and reassembling the system at the test site; and (4) conducting an operational test for about 4 months. According to agency and Ancore officials, the test site candidates are two seaports in California (Long Beach and Oakland) and two land ports in El Paso, Texas. Federal agency and vendor views on the operational viability of PFNA vary. While Customs, DOD, and FAA officials acknowledge that laboratory testing has proven the technical feasibility of PFNA, they told us that the current Ancore inspection system would not meet their operational requirements. Among their other concerns, Customs, DOD, and FAA officials said that a PFNA system not only is too expensive (about $10 million to acquire per system), but also is too large for operational use in most ports of entry or other sites. Accordingly, these agencies question the value of further testing. 
Ancore disputes these arguments, believes it can produce an operationally cost-effective system, and is proposing that a PFNA system be tested at a port of entry. The Office of National Drug Control Policy has characterized neutron interrogation as an “emerging” or future technology that has shown promise in laboratory testing and thus warrants field testing to provide a more informed basis for deciding whether PFNA has operational merit. At the request of the Subcommittee on National Security, International Affairs and Criminal Justice, House Committee on Government Reform and Oversight, in June 1998 we identified the organizations that collect and/or produce counterdrug intelligence, the role of these organizations, the federal funding they receive, and the number of personnel that support this function. We noted that more than 20 federal or federally funded organizations, including Customs, spread across 5 cabinet-level departments and 2 cabinet-level organizations, have a principal role in collecting or producing counterdrug intelligence. Together, these organizations collect domestic and foreign counterdrug intelligence information using human, electronic, photographic, and other technical means. Unclassified information reported to us by counterdrug intelligence organizations shows that over $295 million was spent for counterdrug intelligence activities during fiscal year 1997 and that more than 1,400 federal personnel were engaged in these activities. The Departments of Justice, the Treasury, and Defense accounted for over 90 percent of the money spent and personnel involved. Customs spent over $14 million in 1997 on counterdrug intelligence, and it is estimated that 63 percent of its 309 intelligence research specialists’ duties involved counterdrug intelligence matters. Among its many missions, Customs is the lead agency for interdicting drugs being smuggled into the United States and its territories by land, sea, or air. 
Customs’ primary counterdrug intelligence mission is to support its own drug enforcement elements (i.e., inspectors and investigators) in their interdiction and investigation efforts. Customs is responsible for producing tactical, operational, and strategic intelligence concerning drug-smuggling individuals, organizations, transportation networks, and patterns and trends. In addition to providing these products to its own drug enforcement elements, Customs is to provide this information to other agencies with drug enforcement or intelligence responsibilities. Customs is also responsible for analyzing the intelligence community’s reports and integrating them with its own intelligence. Customs’ in-house collection capability is heavily weighted toward human intelligence, which comes largely from inspectors and investigators who obtain information during their normal interdiction and investigation activities. In 1998, we reported on selected aspects of the Customs Service’s process for determining its need for inspectional personnel—such as inspectors and canine enforcement officers—for commercial cargo and for land and sea passengers at all of its 301 ports. Customs officials were not aware of any formal agencywide efforts prior to 1995 to determine the need for additional cargo or passenger inspectional personnel for its 301 ports. However, in preparation for its fiscal year 1997 budget request and a new drug enforcement operation called Hard Line, Customs conducted a formal needs assessment. The needs assessment considered (1) fully staffing all inspectional booths and (2) balancing enforcement efforts with the need to move complying cargo and passengers quickly through the ports. Customs conducted two subsequent assessments for fiscal years 1998 and 1999. These assessments considered the number and location of drug seizures and the perceived threat of drug smuggling, including the use of rail cars to smuggle drugs.
However, all these assessments were (1) focused exclusively on the need for additional personnel to implement Hard Line and similar initiatives, (2) limited to land ports along the Southwest border and certain sea and air ports considered to be at risk from drug smuggling, (3) conducted each year using generally different assessment factors, and (4) conducted with varying degrees of involvement by Customs’ headquarters and field units. We concluded that these limitations could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports. We further concluded that, for Customs to implement the Results Act successfully, it had to determine its needs for inspectional personnel for all of its operations and ensure that available personnel are allocated where they are needed most. We recommended that Customs establish an inspectional personnel needs assessment and allocation process, and Customs is now in the process of responding to that April 1998 recommendation. Customs has awarded a contract for the development of a resource allocation model, and Customs officials told us that the model was delivered in March 1999 and that they are in the early stages of deciding how to use the model and implement a formal needs assessment system. Under the Results Act, executive agencies are to develop strategic plans in which they, among other things, define their missions, establish results-oriented goals, and identify strategies they plan to use to achieve those goals. In addition, agencies are to submit annual performance plans covering the program activities set out in the agencies’ budgets (a practice which began with plans for fiscal year 1999); these plans are to describe the results the agencies expect to achieve with the requested resources and indicate the progress the agency expects to make during the year in achieving its strategic goals.
The strategic plan developed by the Customs Service addressed the six requirements of the Results Act. Concerning the elements required, the mission statement was results oriented and covered Customs’ principal statutory mission—ensuring that all goods and persons entering and exiting the United States do so in compliance with all U.S. laws and regulations. The plan’s goals and objectives covered Customs’ major functions—processing cargo and passengers entering and cargo leaving the United States. The plan discussed the strategies by which Customs hopes to achieve its goals. The strategic plan discussed, in very general terms, how it related to annual performance plans. The plan discussed some key factors, external to Customs and beyond its control, that could significantly affect achievement of the strategic goals, such as the level of cooperation of other countries in reducing the supply of narcotics. Customs’ strategic plan also contained a listing of program evaluations used to prepare the plan and provided a schedule of evaluations to be conducted in each of the functional areas. In addition to the required elements, Customs’ plan discussed the management challenges it was facing in carrying out its core functions, including information and technology, finance, and human resources management. However, the plan did not adequately recognize Customs’ need to improve financial management and internal control systems, controls over seized assets, plans to alleviate Year 2000 problems, and plans to improve computer security. We reported that these weaknesses could affect the reliability of Customs’ performance data. Further, our initial review of Customs’ fiscal year 2000 performance plan showed that it is substantially unchanged in format from the one presented for 1999. 
Although the plan is a very useful document for decisionmakers, it still does not recognize Customs’ need to improve its internal control systems, control over seized assets, or plans to improve computer security. You asked us to comment on the performance measures proposed by Customs, which are to assess whether Customs is achieving its goals. Customs has included 26 performance measures in its fiscal year 2000 performance plan. These measures range from general information on the level of compliance of the trade community with trade laws and Customs’ regulations (which Customs has traditionally used) to very complex measures, such as transportation costs of drug smuggling organizations. Many of these complex measures were still being developed by Customs when the fiscal year 2000 performance plan was issued. In addition, Customs did not include performance targets for 8 of the 26 measures in its fiscal year 2000 plan. (Customs’ Year 2000 efforts are discussed in Computing Crisis: Customs Has Established Effective Year 2000 Program Controls, GAO/AIMD-99-37, Mar. 29, 1999.) Customs also developed an action plan of items to address its management challenges; the number of items each responsible office is assigned ranges from 1 to 37. The first action plan was issued in February 1999 and has since been updated three times. According to the plan, it is Customs’ intention to implement all action items included in the plan by 2000. Customs’ Director for Planning is to manage and monitor the plan on an ongoing basis. He told us that items are usually added at the behest of the Commissioner. The Management Inspection Division (part of the Office of Internal Affairs) is responsible for verifying and validating the items that have been reported as completed, including determining whether the action taken was effective. The action plan of May 7—the latest version available—shows that 91 of the 203 items had been completed; 110 were ongoing, pending, or scheduled; and 2 had no description of their status.
Overall, use of this kind of management tool can be very helpful in communicating problems and proposed solutions to executives, managers, and the Customs Service workforce, as well as to other groups interested in Customs such as this Committee and us. Mr. Chairman, this completes my statement. I would be pleased to answer any questions. | Pursuant to a congressional request, GAO discussed efforts by the Customs Service to interdict drugs, allocate inspectional personnel, and develop performance measures, including information on Customs' action plan for resolving management problems.
GAO noted that: (1) Customs initiated and encouraged its ports to use several programs to identify and separate low-risk shipments from those with apparently higher smuggling risk; (2) GAO identified internal control weaknesses in one or more of the processes used to screen Line Release program applicants for entry into the program; (3) the Three Tier Targeting program was used at the Southwest border ports where officials say they lost confidence in the program's ability to distinguish high- from low-risk shipments; (4) Customs is evaluating the Automated Targeting System for expansion to other land-border cargo ports; (5) Customs has been using traditional law enforcement measures to evaluate the Aviation program; (6) these measures, however, are used to track activity, not measure results or effectiveness; (7) Customs has discontinued the use of the threat index as an indicator of its effectiveness in detecting illegal air traffic, as well as some other performance measures, because Customs determined that they were not good measures of results and effectiveness; (8) Customs, Department of Defense (DOD), Federal Aviation Administration (FAA), and Ancore Corporation recently began planning to field test the pulsed fast neutron analysis (PFNA) inspection system; (9) while Customs, DOD, and FAA officials acknowledge that laboratory testing has proven the technical feasibility of PFNA, they told GAO that the Ancore inspection system would not meet their operational requirements; (10) agency officials said that a PFNA system not only is too expensive, but also is too large for operational use in most ports of entry or other sites; (11) Customs officials were not aware of any formal agencywide efforts prior to 1995 to determine the need for additional cargo or passenger inspectional personnel for its 301 ports; (12) in preparation for its fiscal year 1997 budget request, Customs conducted a formal needs assessment; (13) GAO concluded that the assessments had 
limitations that could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports; (14) GAO found that Customs' strategic plan contained weaknesses that could affect the reliability of Customs' performance data; (15) Customs' first action plan was issued in February 1999 and has since been updated three times; (16) it is Customs' intention to implement all action items included in the plan by 2000; and (17) use of this kind of management tool can be very helpful in communicating problems and proposed solutions to executives, managers, and the Customs Service workforce. |
The most familiar part of USPS’s retail network is the post office. In fiscal year 2015, there were approximately 26,600 post offices across the country, largely unchanged from fiscal year 2005 (see fig. 1). Post offices are a key part of USPS’s revenue stream—accounting for about 56 percent of USPS’s total retail revenue of about $19 billion in fiscal year 2015. Prior to the introduction of POStPlan, post offices were each managed by postmasters. USPS also uses other facilities to provide key services, such as selling stamps. Over the past decade, the USPS workforce has declined and changed in composition, but continues to account for almost 80 percent of USPS’s total operating costs ($58 of $74 billion in fiscal year 2015). From fiscal years 2005 to 2015, USPS’s workforce decreased from 803,000 to approximately 622,000 employees, or by about 23 percent (see fig. 2). During this period, career employees decreased (from approximately 704,700 to 491,900, or by about 30 percent), while non-career employees increased (from approximately 98,300 to 130,000, or by about 32 percent). Career positions—which are generally full time but also may be part-time—are eligible for annual and sick leave, health insurance, life insurance, and retirement benefits. Non-career employees supplement the career workforce and receive lower wages. They are not eligible for life insurance or retirement benefits, but some are eligible for specified types of health insurance upon hiring while others are eligible after serving at least 1 year. About 90 percent of USPS’s career employees—and some types of non-career employees, such as Postal Support Employees—are covered by collective bargaining agreements and represented through unions.
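The workforce shifts cited above are simple percent changes on the report's own figures, which can be checked directly:

```python
# Percent-change check for the USPS workforce figures cited above
# (fiscal year 2005 vs. fiscal year 2015; all counts are from the report).

def pct_change(old, new):
    """Percent change from old to new (negative means a decline)."""
    return (new - old) / old * 100

total = pct_change(803_000, 622_000)        # total workforce
career = pct_change(704_700, 491_900)       # career employees
non_career = pct_change(98_300, 130_000)    # non-career employees

print(round(total), round(career), round(non_career))  # -> -23 -30 32
```

Rounded to whole percentages, these reproduce the "about 23 percent" decline overall, the roughly 30 percent decline in career employees, and the roughly 32 percent growth in non-career employees.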
APWU, one of USPS’s largest unions, represents over 200,000 USPS employees in the clerk, maintenance, motor vehicle, and support services employee “crafts.” The USPS-APWU 2010-2015 Collective Bargaining Agreement (CBA) contains various provisions that specify rules associated with the performance of bargaining-unit work (such as staffing the retail window and placing mail in customers’ post office boxes) by USPS employees. For example, the agreement specifies that USPS should assign new or revised positions that contain non-supervisory duties to the most appropriate employee craft and that USPS should consult with APWU before doing so. Two associations represent USPS’s postmasters, who are not covered by CBAs: NAPUS and NLPM. USPS is required to consult with these associations on planning, developing, and implementing certain programs and policies—like POStPlan—that affect them. In May 2012, USPS announced the POStPlan initiative. POStPlan sought to right-size USPS’s retail network of—at the time—26,703 post offices. Generally, POStPlan had two elements: reduce retail window service hours at some offices to better match actual customer use, and change the staffing arrangements at those offices to reduce labor costs. According to USPS officials, they informed APWU of POStPlan in May 2012, after announcing the initiative. To evaluate which offices may be appropriate for hour reductions, in December 2011, USPS analyzed the daily workload—as a proxy for customer use—at 17,728 offices. Through this analysis, USPS determined that it could reduce hours at 13,167 of these offices from 8 hours to 2, 4, or 6 hours of retail service a day. Post offices are classified into “levels” and, under POStPlan, these reduced-hour offices would be classified into a new set of levels that correspond with the number of hours of retail service they would provide per day (i.e., Level 2, Level 4, and Level 6).
USPS also determined that the remaining 4,561 offices it analyzed should continue to provide 8 hours of retail service a day; USPS classified these offices as Level 18 offices. USPS planned for most of the reduced-hour offices to be managed remotely. That is, under POStPlan, Level 2, 4, and 6 offices would be considered “remotely managed post offices” (RMPO) and they would report to a postmaster at a Level 18 or above “administrative” post office. USPS created an exception for offices it considered especially isolated. These offices would not be remotely managed and would, instead, be called “part time post offices” (PTPO); all PTPOs would be Level 6 offices. According to USPS officials, Level 2, 4, and 6 RMPOs and PTPOs are the “POStPlan post offices;” Level 18 or above offices are not considered POStPlan post offices. USPS plans to review workloads at POStPlan RMPOs annually and, based on these reviews, may increase or decrease the number of hours of retail service at these offices. USPS also plans to review the workload at the Level 18 and above offices through USPS’s separate, pre-POStPlan processes, and based on the results, USPS may designate any qualifying office a POStPlan post office and reduce its hours accordingly if its workload justifies a reduction in hours. Regarding the staffing arrangements at these offices, USPS planned to replace career postmasters in the POStPlan post offices with less costly non-career or part-time employees, as shown in fig. 3. Level 18 offices would continue to be staffed by career, full-time postmasters. On July 9, 2012, APWU filed a labor grievance claiming the changes introduced by POStPlan violated provisions of the USPS-APWU 2010-2015 CBA. USPS officials said they had the authority to modify the POStPlan initiative during the grievance procedure but decided to proceed with POStPlan implementation because they believed it was the proper operational decision for its customers, employees, and USPS.
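The classification scheme above (RMPO levels, the especially-isolated PTPO exception, and Level 18 offices) amounts to a small decision rule. The sketch below is hypothetical: USPS's actual workload thresholds are not given in this report, so the mapping simply rounds a measured daily retail workload up to the nearest offered window of 2, 4, or 6 hours and keeps busier offices at Level 18.

```python
# Hypothetical sketch of POStPlan office classification. The workload
# thresholds used here are illustrative assumptions, not USPS criteria.

def classify_office(daily_workload_hours, especially_isolated=False):
    """Return (level, office_type) for an analyzed post office."""
    if daily_workload_hours > 6:
        return 18, "Level 18"        # continues to provide 8 hours of service
    if especially_isolated:
        return 6, "PTPO"             # part time post offices are all Level 6
    for level in (2, 4, 6):          # remotely managed post offices
        if daily_workload_hours <= level:
            return level, "RMPO"

print(classify_office(3.5))                             # -> (4, 'RMPO')
print(classify_office(5.0, especially_isolated=True))   # -> (6, 'PTPO')
print(classify_office(7.2))                             # -> (18, 'Level 18')
```

Under this rule, an annual workload review would simply re-run the classification with the office's latest measured workload, matching the report's description of hours being adjusted up or down over time.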
As a result, USPS continued with POStPlan implementation until September 2014, when—as discussed later in this report—an independent arbitrator issued a decision that resolved the grievance. Prior to the issuance of the POStPlan arbitration decision in September 2014, USPS had taken steps to reduce hours at almost three-quarters of POStPlan post offices. After announcing POStPlan in May 2012, USPS began implementation by reviewing its determinations on: (1) which offices would have reduced hours, (2) which were considered especially isolated, (3) which would be reclassified as Level 18, and (4) which would become administrative offices. In July 2012, USPS finalized those decisions and communicated the results to relevant field personnel, who had the opportunity to advise on any potential concerns that could not be identified at the USPS headquarters level. In September 2012, USPS began surveying residents of the affected communities to give them an opportunity to provide input before reducing their office’s hours. The survey asked whether they preferred USPS continue with its plan to reduce hours or whether they preferred USPS close their office and institute alternatives, such as relocating post office box service to a nearby office. In October 2012, USPS began holding meetings in the communities to communicate the survey results and consider feedback. Thereafter, USPS continued to conduct meetings and reduce hours at offices on a rolling basis, with the first reductions occurring in November 2012 and most occurring within the first year of POStPlan’s announcement (see fig. 4). Specifically, from November 2012 through August 2014, USPS reduced hours at 9,159 post offices, or at about 72 percent of the almost 12,800 that would ultimately have hours reduced under POStPlan. 
According to USPS officials, they implemented POStPlan on a rolling basis to make building modifications to some offices (to ensure that customers could maintain access to their post office box even with reduced hours) and to minimize the effect on POStPlan-affected postmasters. For example, implementing POStPlan on a rolling basis allowed affected postmasters more time to find reassignment opportunities, as described below. In addition to reducing hours at over 9,000 of the POStPlan post offices, USPS simultaneously took steps to make the necessary staffing changes and provide options for postmasters to separate from USPS or be reassigned to other positions ahead of a planned “reduction in force” (RIF). USPS announced a $20,000 separation incentive offer for all postmasters in May 2012, followed by a $10,000 offer in July 2014 to those POStPlan-affected postmasters who did not accept the first incentive offer. In May 2012, USPS also began periodically posting vacancies that POStPlan-affected postmasters could apply to, such as positions that became available as postmasters retired through the May 2012 separation incentive. Postmasters in offices set to become Level 6 offices could also opt to remain in their office and accept a demotion to the new, part-time position. According to USPS officials, as postmasters separated from USPS or accepted reassignments, USPS filled the positions according to its new POStPlan staffing arrangements. USPS initially intended to complete POStPlan implementation by September 2014, with any POStPlan-affected postmasters who had not separated from USPS or been reassigned to an alternate position as of this date to be separated via RIF. However, USPS extended this deadline twice during implementation—first to January then February of 2015—in order to, according to USPS officials, find reassignment opportunities for as many POStPlan-affected postmasters as possible. 
By September 2014, about 4,100 POStPlan-affected postmasters had separated from USPS and about 5,800 had been reassigned to a different position. In July 2012, USPS estimated it would achieve $516 million annually in labor cost savings once POStPlan had been fully implemented for a complete year (that is, once retail hours had been adjusted in all POStPlan post offices). Given that USPS originally intended to complete implementation by September 2014, this means the program would have been implemented for a complete year in September 2015, with full annual cost savings beginning in fiscal year 2016. To develop this estimate, USPS calculated “before POStPlan” and “after POStPlan” labor costs at the approximately 13,000 POStPlan post offices and at the Level 18 offices using average salary and benefits data as of pay period 6 of fiscal year 2012. To arrive at the “before POStPlan” labor cost, USPS multiplied the number of post offices at each applicable, pre-POStPlan office level by the average salary and benefits that career postmasters at those levels earn, then totaled the results. To arrive at the “after POStPlan” labor cost, USPS multiplied the number of offices at each post-POStPlan office level by the projected salary and benefits it expected for employees that would staff those offices (based on the new POStPlan staffing arrangements), then totaled the results. The $516 million represents the difference between these “before” and “after” calculations. In June 2015, USPS revised this original estimate to $518 million in annual labor cost savings based on: (1) the actual savings it estimated it achieved from fiscal years 2012 to 2014, (2) the remaining savings it expected to achieve from offices whose hours had been reduced in the prior year, and (3) the savings it expected to achieve from offices whose hours had not yet been reduced.
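The estimation method just described is a difference of two weighted sums. A minimal sketch of that arithmetic follows, with entirely hypothetical office counts, level labels, and salary-plus-benefits figures standing in for USPS's actual pay-period-6 data:

```python
# Sketch of USPS's 2012 POStPlan savings estimate method. All counts and
# pay figures below are hypothetical placeholders, not USPS data; only the
# method (difference of "before" and "after" weighted sums) follows the report.

# (number of offices, average annual salary + benefits per office)
before_postplan = {          # career postmasters at pre-POStPlan levels
    "pre-level A": (4000, 60_000),
    "pre-level B": (5000, 65_000),
    "pre-level C": (4000, 70_000),
}
after_postplan = {           # projected staffing at post-POStPlan levels
    "Level 2": (2000, 10_000),
    "Level 4": (6000, 20_000),
    "Level 6": (5000, 30_000),
}

def total_labor_cost(offices):
    """Sum of (office count x average compensation) across levels."""
    return sum(count * pay for count, pay in offices.values())

annual_savings = total_labor_cost(before_postplan) - total_labor_cost(after_postplan)
print(annual_savings)  # -> 555000000 with these placeholder figures
```

With USPS's real counts and compensation data, the same subtraction produced the $516 million figure reported above.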
On September 5, 2014, an impartial arbitrator resolved APWU’s POStPlan grievance and ruled that the staffing changes introduced by POStPlan violated certain provisions of the USPS-APWU 2010-2015 CBA, and that USPS must reverse several of these changes. The arbitrator agreed with APWU’s argument that, under POStPlan, employees in Level 4 and 6 RMPOs were no longer performing any managerial or supervisory work and also that the work was clerical in nature and should be assigned to bargaining-unit employees. As a result, according to USPS officials, the arbitration decision significantly changed staffing in these offices, which account for about 82 percent of POStPlan post offices as of August 2015, by awarding all non-bargaining-unit positions in them to APWU-represented employees. The arbitrator’s decision on staffing in Level 4 RMPOs also affected the resolution of a separate dispute. Specifically, in the POStPlan arbitration decision, the arbitrator also ruled on a dispute regarding the type of work assignments that staff in Level 18 offices could perform, finding certain Level 18 offices must be staffed by a career employee (see fig. 5). USPS continued to modify hours at POStPlan post offices as these changes were taking place. According to USPS officials, subsequent memorandums of understanding between USPS and APWU mitigated some of what the officials believe could have been potentially negative effects of the arbitration decision. According to USPS officials as of February 2016, staffing changes related to POStPlan and the arbitration decision are complete. USPS, NAPUS, and NLPM officials told us that managing employee work rules under the post-arbitration staffing arrangements is more complex than under the original POStPlan staffing arrangements. They noted that this is because each employee category has different work rules to manage and there were fewer employee categories under the original POStPlan staffing arrangements.
USPS estimated that, due to the arbitration decision, annual POStPlan cost savings will be lower than originally expected. Specifically, in June 2015, USPS estimated that the decision will reduce estimated annual cost savings by $181 million, which is approximately 35 percent less than the revised estimate of $518 million. As a result, USPS projected that POStPlan will now result in total annual labor cost savings of about $337 million. To develop the estimate of the impact from the arbitration decision, USPS used a slightly different approach than it had used to develop its original cost-savings estimate. Specifically, USPS calculated the difference between the hourly salary and benefit rates for employees in the Level 4 and 6 POStPlan post offices under the original, pre-arbitration POStPlan staffing arrangements and under the post-arbitration POStPlan staffing arrangements. It then multiplied the rate differences by the total hours worked per year at the applicable offices and totaled the results. This resulted in a difference of $181 million. USPS then subtracted the $181 million from the $518 million in annual savings it expected to achieve to arrive at the revised estimated annual savings of $337 million. According to USPS officials, USPS developed this estimate using a different approach from its original POStPlan cost-savings estimate because the arbitration decision resulted in a new labor type and rate and USPS believed this was the most logical method to factor in the arbitrator’s decision. USPS attributes the reduced cost savings to the higher compensation employees receive in the POStPlan post offices under the post-arbitration decision staffing arrangements relative to the compensation these employees would have received under the original, pre-arbitration, staffing arrangements, as shown in fig. 6. 
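USPS’s method for estimating the arbitration decision’s impact, as described above, amounts to a rate difference multiplied by hours worked, summed across the affected office levels and then subtracted from the revised savings estimate. A minimal sketch, in which the hourly rates and annual hours are hypothetical assumptions rather than USPS’s actual figures:

```python
# Sketch of USPS's arbitration-impact calculation:
# (post-arbitration rate - pre-arbitration rate) x annual hours worked,
# summed over the affected office levels. All rates and hours below are
# hypothetical placeholders.

offices = [
    # (office level, pre-arbitration rate, post-arbitration rate, annual hours)
    ("Level 4 RMPO", 14.87, 25.81, 9_000_000),
    ("Level 6 RMPO", 21.17, 25.81, 6_000_000),
]

# Total added labor cost due to the higher post-arbitration compensation.
impact = sum(hours * (post - pre) for _, pre, post, hours in offices)

# Subtract the impact from the revised $518 million savings estimate.
revised_annual_savings = 518_000_000 - impact
```

With these placeholder inputs the impact is about $126 million; USPS’s actual calculation produced $181 million, yielding its $337 million revised estimate.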
USPS officials told us that while the arbitration decision reduced the cost savings it expected to achieve, POStPlan was still the correct operational decision for USPS and its stakeholders. We reviewed USPS’s 2012 original POStPlan cost-savings estimate and 2015 estimate of the arbitration decision’s impact on cost savings and found that while POStPlan most likely resulted in some cost savings, the estimates have limitations that affect their reliability. Specifically, the limitations include: (1) imprecise and incomplete labor costs, including errors in the underlying data that affect the accuracy of calculations of actual savings achieved; (2) lack of a sensitivity review; and (3) the exclusion of other factors that would be necessary to consider the net cost savings of the POStPlan initiative, particularly the potential impact of reduced hours on retail revenue. Our guidance on assessing data reliability states that reliable data, which include estimates and projections, can be characterized as being accurate, valid, and complete. For example, accurate data appropriately reflect the actual underlying information, valid data actually represent what is being measured, and complete data appropriately include all relevant information. Data should also be consistent, a subset of accuracy. Consistency can be impaired when there is an inconsistent interpretation of what data should be entered. Internal control standards adopted by USPS also state that program managers and decision makers need complete and accurate data to determine whether they are meeting their goals, and that they should use quality information to make informed decisions and evaluate an entity’s performance in achieving key objectives and addressing risks. These standards also note that the ability to generate quality information begins with the data used. 
While USPS’s original estimate of the savings it expected to achieve from POStPlan clearly states that it accounts for labor costs only, we found that the salary and benefits information that USPS used to calculate these labor costs was imprecise, and this imprecision contributes to inaccuracies in the estimate. For example: When calculating the “before POStPlan” labor costs, USPS used average postmaster salaries and benefits and, when calculating the “after POStPlan” costs, sometimes used the salary and benefits of newly hired postmasters and in other instances used the salary and benefits of incumbent postmasters. In a POStPlan advisory opinion, PRC noted that using an average postmaster salary is imprecise; that salaries at post offices vary, on average, by as much as $20,000 from the lowest to the highest salary; and that these variations can add up considerably when thousands of offices are considered. Although USPS used average postmaster salaries and benefits for the “before POStPlan” labor costs, approximately 3,100 of the post offices included in the calculation were not being staffed by postmasters. These offices were being staffed by other types of employees, such as non-postmasters designated as “Officers in Charge,” whose salaries were generally lower. In the POStPlan advisory opinion, PRC estimated that if it assumed salaries at these offices were at a level more representative of these other types of employees, the annual cost savings would be $386 million, not $516 million. In addition, when calculating some of the “after POStPlan” labor costs, USPS assumed salaries above the minimum salary for that grade, a difference of as much as $25,000. In the POStPlan advisory opinion, PRC explained that this may have overstated these costs and estimated that if these assumptions were corrected, the annual cost savings would be $704 million, not $516 million.
USPS included about 100 post offices that were actually closed or suspended in its calculation of labor costs despite stating that suspended offices were not part of POStPlan, that it would not re-visit closed offices’ status, and that there were no plans to reopen these offices. In its POStPlan advisory opinion, PRC estimated that the cost savings would be $513 million, not $516 million, if USPS excluded these offices. Similar to the original POStPlan cost-savings estimate, USPS’s estimate of the arbitration decision’s impact on cost savings has limitations related to imprecise labor costs, which, as noted above, contribute to inaccuracies. For example: USPS used a single, proxy employee category and hourly rate to represent all employees under the pre-arbitration POStPlan staffing arrangements, rather than the actual different rates these employees would have received, as described above. USPS used this proxy although it had the actual rates, and none of the actual rates matched the proxy rate. USPS included all Level 6 post offices and their associated positions’ labor costs in its estimate. However, the arbitration decision did not affect the Level 6 PTPOs. This is inconsistent with how USPS treated Level 2 RMPOs in the estimate. These RMPOs were also not affected by the arbitration decision. Removing the Level 6 PTPOs from the estimate reduces the impact from about $181 million to about $170 million, meaning the revised savings would have been $348 million, not $337 million. USPS’s post-arbitration decision estimate of $337 million in expected annual cost savings relies, in part, on USPS calculations of actual savings achieved due to POStPlan, but the accuracy of these actual savings calculations may be limited by errors in the underlying salaries and benefits data used to develop them. As described above, to arrive at $337 million, USPS subtracted the $181-million impact it calculated from the revised estimate of $518 million it developed in June 2015.
Also as noted above, USPS developed that $518 million estimate in part by considering the actual savings it achieved from fiscal years 2012 to 2014. However, we found errors in USPS’s salaries and benefits data that, according to USPS officials as of March 2016, may have been caused by employees’ workhours being incorrectly recorded when employees worked in more than one office. We found that these errors would result in some offices’ salaries and benefits being understated, and others being overstated. While understated and overstated costs at individual offices would likely offset each other in aggregate (i.e., when costs at all offices, either POStPlan or non-POStPlan, were considered), they do not offset when analyzing costs at just POStPlan post offices. Given that according to USPS, its calculations of actual savings achieved consider costs at POStPlan—but not non-POStPlan—offices, the calculations may be limited by these errors. Additionally, according to USPS as of October 2015, thus far it has saved $306 million in labor costs from fiscal year 2012 to June 2015 as a result of POStPlan. Although POStPlan most likely resulted in cost savings because of the overall reduction in work hours at thousands of post offices, the accuracy of these calculated savings may also be limited by these errors. USPS’s calculation of labor costs in both its original and post-arbitration decision estimates was also incomplete. A full estimate of labor costs might have included additional labor cost elements. For example: USPS’s original estimate did not include costs associated with the addition of supervisors at the Level 18 or above offices that remotely manage the POStPlan post offices due to their increased supervisory workload. Specifically, according to USPS officials, USPS added about 320 such positions, though not all as a result of POStPlan, and the average hourly pay for supervisors as of August 2015 was $48.73.
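The offsetting effect described above can be illustrated directly: workhours misrecorded between a POStPlan office and a non-POStPlan office cancel in the system-wide total but not in the POStPlan-only subtotal. All figures below are hypothetical.

```python
# Hypothetical illustration: 100 workhours that belong to a non-POStPlan
# office are misrecorded at a POStPlan office.
rate = 20.0  # hypothetical hourly salary-and-benefits rate

true_costs     = {"POStPlan office": 400 * rate, "non-POStPlan office": 600 * rate}
recorded_costs = {"POStPlan office": 500 * rate, "non-POStPlan office": 500 * rate}

# The error disappears in the aggregate of all offices...
assert sum(true_costs.values()) == sum(recorded_costs.values())
# ...but the POStPlan-only subtotal is overstated by the misrecorded hours.
assert recorded_costs["POStPlan office"] != true_costs["POStPlan office"]
```

This is why a savings calculation restricted to POStPlan post offices remains exposed to errors that wash out when all offices are considered together.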
USPS’s original estimate did not include one-time labor costs associated with separation incentives USPS offered to postmasters. According to USPS officials, acceptance of these separation incentives by POStPlan-affected postmasters cost USPS about $69 million. USPS’s estimate of the arbitration decision’s impact on cost savings excluded the potential cost impact of staffing changes in Level 18 post offices. Although USPS officials have stated that Level 18 offices are not part of POStPlan, the arbitration decision and a September 2014 memorandum of understanding that further implemented it required that a certain type of position staffing Level 18 offices be changed to a bargaining-unit clerk position. Our cost-estimating best practices state that sensitivity analysis should generally be conducted when estimating costs, especially if changes in key assumptions would likely have a significant effect on the estimate. Sensitivity analyses identify a range of possible cost estimates by varying major assumptions, parameters, and inputs to enable an understanding of the impact altered assumptions have on estimated costs. This can also help managers and decision makers identify risk areas and relevant program alternatives. Since uncertainty cannot be avoided, it is necessary to identify the elements that represent the most risk, which can be done through sensitivity analysis. In developing its estimates, USPS did not conduct a sensitivity analysis to determine what would happen to estimated costs and savings should key assumptions it was making under POStPlan vary. For example, USPS officials told us that they recognized the possibility that APWU would challenge the planned staffing arrangements at POStPlan post offices.
Despite this statement, in its original cost-savings estimate, USPS did not analyze the sensitivity of POStPlan labor costs to alternative staffing arrangements that might have been more in line with APWU’s views on the staffing provisions specified in the USPS-APWU 2010-2015 CBA. USPS officials explained that they believed that savings associated with reduced hours at POStPlan post offices would significantly outweigh any reduction in savings should an arbitrator rule in APWU’s favor. Similarly, USPS did not analyze the sensitivity of its estimated savings to possible changes in the benefits offered to USPS employees. For example, when calculating the salary and benefits of Postmaster Reliefs (PMR)—the employees expected to staff Level 2 and 4 RMPOs—USPS assumed that the only benefit they were eligible for was 1 hour of annual leave for every 20 hours worked. However, in 2014, USPS began providing health coverage for PMRs who meet the requirements of the 2009 Affordable Care Act. Additionally, in both its estimates, USPS did not consider that staffing at offices may continue to change based on the workload re-evaluations it plans to conduct. For example, under the original POStPlan staffing arrangements, a Level 4 RMPO staffed by a PMR earning $14.87 per hour could become a Level 6 RMPO staffed by a part-time postmaster earning $21.17 per hour if, after a re-evaluation of the office’s workload, USPS determines that the office’s workload has increased enough to justify a Level 6 classification. Thus, the number of offices at each level might continue to increase or decrease year after year. This also means that although USPS refers to its estimates as estimates of the “annual” savings it will achieve upon full POStPlan implementation, only a single-year estimate of savings can be produced at any given time, unless and until estimates of potential staffing changes in future years can be made.
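A sensitivity analysis of the kind described here need not be elaborate; even a one-variable check, recomputing projected labor cost under the PMR rate ($14.87 per hour) versus the part-time postmaster rate ($21.17 per hour) cited above, would have quantified the exposure. In this sketch the office count and operating schedule are hypothetical assumptions; only the two hourly rates come from the report.

```python
# One-variable sensitivity check: projected annual labor cost at Level 4
# RMPOs under two staffing assumptions. Office count and schedule are
# hypothetical; the hourly rates are those cited in this report.

OFFICES = 6000            # hypothetical count of Level 4 RMPOs
HOURS_PER_YEAR = 4 * 302  # 4 retail hours/day, ~302 operating days (assumed)

def annual_cost(hourly_rate):
    return OFFICES * HOURS_PER_YEAR * hourly_rate

for rate in (14.87, 21.17):
    print(f"${rate}/hour -> ${annual_cost(rate):,.0f} per year")
```

The gap between the two results indicates how much of the projected savings is at risk if the lower-cost staffing assumption does not hold, which is roughly what happened after the arbitration decision.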
OMB cost-estimating guidance states that agencies should determine whether an activity’s benefits (savings) also take into account the costs incurred to implement it. That is, the guidance suggests that it is the net benefit, or in this case, the net cost savings that should be considered. However, USPS’s estimate did not include certain factors that could affect the net cost savings of the POStPlan initiative. In particular, USPS’s original estimate did not include an analysis of the extent to which reduced hours at POStPlan post offices could affect revenue at those offices and across USPS. That is, it did not fully consider any offsetting financial losses that should be weighed against estimated savings. In July 2012, USPS testified to PRC that it did not anticipate losing revenue due to POStPlan, though it had not conducted a financial analysis to support this statement. Specifically, as described below, USPS expected any revenue lost at POStPlan post offices to be absorbed elsewhere. Despite this assumption, in its POStPlan advisory opinion, PRC stated that it was concerned that reduced retail hours may lead to reduced revenue and recommended that USPS undertake a post-implementation review of POStPlan to measure changes in revenue at POStPlan post offices. In September 2015, we asked USPS what, if any, steps it had taken to address PRC’s recommendation. At that time, USPS had not yet taken steps to analyze changes in revenue at POStPlan post offices, though in January 2014—in response to a request from PRC—USPS submitted data to PRC on the fiscal year 2013 revenue earned in POStPlan post offices and in the Level 18 and above administrative post offices. USPS officials told us that they planned to conduct a revenue analysis annually, comparing fiscal year over fiscal year, and later provided us with a preliminary analysis of changes from fiscal years 2014 to 2015.
USPS’s preliminary POStPlan revenue analysis has limitations that may affect its representation of changes in revenue at POStPlan post offices and across USPS. This analysis showed that walk-in revenue declined by about 4 percent at POStPlan post offices, as well as at non-POStPlan offices, and at all offices in general. However, we found that USPS’s calculation of revenue in POStPlan post offices was inconsistent with its definition of what constitutes POStPlan post offices. Specifically, USPS included revenue from the Level 18 or above administrative offices, though USPS does not define these as POStPlan post offices. Additionally, according to USPS officials, those are the offices most likely to absorb customers who are looking for nearby alternatives in the face of reduced hours at their local office. USPS also excluded the Level 6 PTPOs from its analysis although it considers these to be POStPlan post offices. After we inquired about the Level 6 PTPOs, USPS provided us with a revised analysis but, in this revision, USPS included the Level 18 and above administrative offices as POStPlan post offices. When we re-sorted the offices in USPS’s analysis to exclude the Level 18 and above administrative offices from the “POStPlan post offices” category and include the Level 6 PTPOs in the “POStPlan post offices” category, we found that revenue declined by about 10 percent, not 4 percent, in POStPlan post offices and by about 4 percent in non-POStPlan post offices. To obtain a more comprehensive picture of how POStPlan may have affected revenue in the reduced-hour offices, we also analyzed the walk-in revenue earned at POStPlan post offices, by office level, for the most recent fiscal year (2015) compared to the most recent fiscal year in which no POStPlan implementation activities had begun to occur (2011).
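The re-sort described above is essentially a regrouping step: assign each office type to the “POStPlan” or “non-POStPlan” category according to USPS’s own definitions before computing percent changes. A minimal sketch with hypothetical revenue figures (in millions of dollars):

```python
# Regroup offices per USPS's own definitions, then compute revenue change
# per category. Per those definitions, RMPOs and Level 6 PTPOs are POStPlan
# offices; Level 18+ administrative offices are not. Revenue figures
# (prior year, current year, $M) are hypothetical.

offices = {
    "Level 2 RMPO": ("POStPlan",      120,  100),
    "Level 4 RMPO": ("POStPlan",      300,  270),
    "Level 6 PTPO": ("POStPlan",       50,   45),
    "Level 18+":    ("non-POStPlan", 2320, 2060),
}

def pct_change(category):
    prior = sum(p for cat, p, c in offices.values() if cat == category)
    current = sum(c for cat, p, c in offices.values() if cat == category)
    return (current - prior) / prior * 100

print(f"POStPlan offices: {pct_change('POStPlan'):.1f}%")
print(f"non-POStPlan offices: {pct_change('non-POStPlan'):.1f}%")
```

Because the Level 18 and above offices carry far more revenue than the reduced-hour offices, misplacing them in the POStPlan category dilutes the measured decline, which is why the categorization matters.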
We found that revenue at RMPOs in fiscal year 2015 was 29 percent lower than revenue, adjusted for inflation, in fiscal year 2011, with over a 50 percent decline in Level 2 RMPOs. See table 1. While our analysis shows that revenue at the POStPlan RMPOs declined by 29 percent, this revenue constituted a small portion of the total revenue from all of USPS’s post offices. In January and February of 2016, USPS conducted additional analysis comparing fiscal years 2011 and 2015 post office walk-in revenue. According to this analysis, revenue from RMPOs in fiscal year 2011 accounted for just 4.5 percent of approximately $11.9 billion in total revenue earned from post offices that year and, in fiscal year 2015, 3.7 percent of approximately $10.8 billion in total revenue. Additionally, USPS’s analysis showed that the Level 18 or above administrative offices experienced less of a decline in revenue than the RMPOs they remotely manage. Specifically, revenue at these offices in fiscal year 2011 was about $2.32 billion (adjusted for inflation) and, in fiscal year 2015, about $2.06 billion, a decline of about 11.2 percent. In its analysis, USPS also reported total revenue from all non-POStPlan offices. However, USPS’s reported total again included the Level 6 PTPOs in this category. Overall, revenue at all post offices declined by about 14.6 percent from fiscal years 2011 to 2015 when fiscal year 2011 revenue is adjusted for inflation. While both our and USPS’s analyses comparing fiscal year 2011 and 2015, and USPS’s analysis of changes from fiscal years 2014 to 2015 help to illustrate the potential effects of POStPlan on revenue, they do not fully measure it. In particular, analyzing the extent of revenue reductions that are independently due to POStPlan would require a more complex analysis that takes into account a variety of factors, and the USPS data available to us were not adequate to conduct such an analysis. 
For example, in addition to considering changes in revenue at POStPlan post offices by level, other factors need to also be considered, such as revenue changes in non-POStPlan offices and other retail channels within a reasonable distance to POStPlan offices, as well as at offices and channels not near POStPlan offices. Such an analysis would also need to consider other factors that may influence retail revenue over time. These factors could include, for example, the state of the general economy, the adoption of technology substitutes to traditional mail (such as e-mail, e-retail, and electronic bill payments), and relevant demographic characteristics that might affect mail volume, such as population density and household income. Such an analysis would also need to consider the movement of customer traffic to alternate ways of accessing postal services. For instance, in fiscal year 2015, about 46 percent of USPS’s total retail revenue of about $19 billion was generated through these alternate access channels, which include usps.com, self-service kiosks, and third-party retail partners. In the case of POStPlan, USPS officials explained that since revenue from POStPlan post offices accounts for a small portion of total post office revenue and cost reductions due to POStPlan were expected to be much larger, cost savings due to POStPlan would likely outweigh lost revenue. However, analyzing the extent of revenue reductions that are independently due to POStPlan through a more complex analysis could be helpful in evaluating the overall impact of POStPlan if USPS expanded the initiative to additional post offices, as may occur due to the workload re-evaluations that USPS plans to conduct.
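The year-over-year comparisons in this section all reduce to the same constant-dollar arithmetic: express the earlier year’s revenue in the later year’s dollars, then compute the relative change. Using the Level 18 and above administrative-office figures reported above (about $2.32 billion inflation-adjusted for fiscal year 2011 versus about $2.06 billion for fiscal year 2015):

```python
# Percent change in walk-in revenue at Level 18+ administrative offices,
# using the inflation-adjusted figures cited in this report.
fy2011_adjusted = 2.32e9  # FY2011 revenue, adjusted to FY2015 dollars
fy2015 = 2.06e9           # FY2015 revenue

pct_change = (fy2015 - fy2011_adjusted) / fy2011_adjusted * 100
print(f"{pct_change:.1f}% change")  # prints "-11.2% change"
```

This reproduces the roughly 11.2 percent decline reported above; the same formula underlies the 29 percent RMPO decline and the 14.6 percent all-office decline.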
Overall, USPS officials have acknowledged that their original POStPlan cost-savings estimate was not sophisticated—characterizing it as a rough estimate that used a “quick and dirty” approach—and have also acknowledged the limitations of their estimate of the arbitration decision's impact on cost savings. Prior to making any changes (like POStPlan) in the nature of postal services that are at least substantially nationwide in scope, USPS must request an advisory opinion from PRC on the change. USPS officials explained that this process entails a review of the proposed initiative by PRC and that when making their case before PRC, USPS’s legal counsel makes recommendations on strategy for the proceeding in consultation with other USPS staff. They further noted that in order to make an informed business decision prior to undertaking an initiative such as POStPlan, USPS undertakes reasonable efforts to appropriately assess the expected cost savings to determine whether the initiative is worth pursuing. The officials added that the nature and extent of this assessment varies by the specific circumstances, particularly, the financial circumstances facing USPS, the need for expedited implementation of an initiative, and USPS’s overall confidence that an initiative will prudently reduce costs. USPS officials stated that in cases such as POStPlan, there is no strict guidance or thresholds that govern when cost-savings estimates should be rigorous versus when it is sufficient to use a less rigorous approach to gain a rough approximation, and there is no legal requirement to produce cost-savings estimates or to use a particular methodology. Instead, USPS officials said these are judgmental decisions. Regarding USPS’s calculations of actual savings achieved, USPS officials have also acknowledged the limitations of the underlying salaries and benefits data. 
For example, USPS officials acknowledged that the errors we found in these data would result in some offices’ salaries and benefits being understated, and others being overstated. In February 2016, USPS officials told us that they were not previously aware of this issue and that they have begun to take steps to further understand the scope of the errors and how and why they occurred. As of March 14, 2016, USPS officials were continuing to assess this issue, but USPS’s time frame for identifying the scope and resolving the issue remains unclear, and it is also unclear if USPS subsequently intends to update its calculations of actual savings achieved. Regarding its analysis of changes in revenue from fiscal year 2014 to 2015, after reviewing our analysis of revenue at POStPlan post offices, USPS has also acknowledged that some PTPOs should have been included in its analysis and provided details on why it included these offices and the Level 18 and above administrative offices in the categories that it did. In particular, USPS officials told us that they agreed that some of the PTPOs should have been included in their analysis as POStPlan post offices and explained that they had included these offices in their analysis as non-POStPlan offices because this type of office existed prior to POStPlan. They also noted that they included the Level 18 and above administrative offices as POStPlan post offices because, as noted above, those would be the offices most likely to absorb customers who are looking for nearby alternatives in the face of reduced hours at their local post office. USPS officials also said that it is important to note that revenue declines at POStPlan post offices may not be fully lost to USPS because customers may use other nearby retail channels (e.g., the Level 18 or above offices, usps.com, etc.) instead. 
While we agree that ultimately, it is the revenue lost to USPS as a whole that is most relevant to USPS, it is still important to accurately represent the changes in revenue at the reduced-hour offices to fully understand the effects of POStPlan on these offices and the trade-offs necessary between costs and benefits, and to provide relevant information for program evaluation and future decision making. We have long reported that USPS needs to restructure its operations to better reflect customers’ changing use of the mail and to align its costs with revenues. Toward this end, USPS has proposed or started a number of initiatives, such as POStPlan, to increase efficiency and reduce costs as it seeks to improve its financial viability. Having reliable data and quality methods for calculating the potential savings USPS expects to achieve through these initiatives, the actual savings they achieve, and the potential effects they have on revenue are critical. Such rigor can help ensure that USPS officials and oversight bodies, such as PRC and Congress, have accurate and relevant information to help USPS strike the right balance between the costs and benefits of the various initiatives. Although POStPlan was an initiative that affected about 66 percent of USPS’s post offices and postmasters, USPS did not produce cost-savings estimates with the level of rigor that an initiative with such a large footprint may have warranted. Having reliable estimates of expected cost savings when initially making decisions could help ensure that USPS is achieving its goals, yet USPS’s estimates of expected savings had limitations. For example, by not conducting a sensitivity analysis, as recommended by our cost-estimating guidance, USPS may have missed an opportunity to test how vulnerable its expected cost savings were to program changes.
For instance, USPS may have been able to test how its expected savings would change should any of its assumptions change, as some later did because of the arbitration decision, which affected staffing arrangements at the majority of POStPlan post offices. If USPS had noticed significant differences in its projected labor costs and savings through a sensitivity analysis, it might have taken steps to address these vulnerabilities prior to announcing POStPlan. USPS believes that, given likely savings and the realities of postal operations, moving forward with POStPlan was the correct operational decision. However, for future initiatives like POStPlan, having guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach could help ensure that USPS produces estimates that thoroughly consider the scope of a program’s implications, effects, and alternatives. Such an approach is particularly relevant given that USPS has projected unsustainable losses through fiscal year 2020 and beyond, may continue to develop efficiency and cost-savings initiatives, and will need quality information on the potential savings and effects associated with these initiatives. Further, according to USPS as of October 2015, it has saved $306 million in labor costs from fiscal year 2012 to June 2015 as a result of POStPlan. While we recognize that POStPlan most likely resulted in some cost savings, the accuracy of USPS’s calculation of savings may be limited by errors we found in USPS’s salaries and benefits data, and thus, it is unclear whether USPS may have actually saved more or less. USPS’s time frames for assessing and resolving this issue—and whether it intends to, subsequently, update its calculations of actual savings achieved—are also unclear. Finally, in its estimates of expected savings, USPS did not initially consider the effect that reduced retail hours may have on revenue and thus did not calculate an estimate of net cost savings. 
This means USPS had an incomplete picture of the effects of POStPlan. Even the preliminary analysis of changes in revenue that USPS later conducted was limited because it was not consistent with USPS’s definition of what constitutes POStPlan post offices. Improving the quality of future POStPlan revenue analyses, especially as the program potentially expands to additional offices, could help USPS better understand the implications of POStPlan and inform future decision-making as USPS conducts workload re-evaluations of post offices. The Postmaster General should direct executive leaders to: establish guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach that includes, for example, a sensitivity analysis and consideration of other factors that could affect net costs and savings, versus when it is sufficient to develop a rough estimate; continue to take steps to assess and resolve the salaries and benefits data errors and, subsequently, update calculations of actual cost savings achieved due to POStPlan as appropriate; and verify that calculations of changes in revenue at POStPlan post offices in USPS’s revenue analyses are consistent with USPS’s definition of POStPlan post offices and take steps to consider when it may be appropriate to develop an approach for these analyses that will allow USPS to more fully consider the effects of POStPlan on retail revenue across USPS. We provided a draft of this report to PRC and USPS for their review and comment. PRC provided comments in an e-mail and stated that it found the report accurately reflects PRC’s advisory opinion and actions regarding POStPlan. USPS provided a written response, which is reproduced in appendix II of this report.
In the written response, USPS disagreed with the overall tone and title of our report, provided observations on our recommendations but did not state whether it agreed or disagreed with them, and disagreed with some of the specific examples we use in our report. Regarding the tone and title of our report, in its response USPS reported that it does not see a basis for any conclusion other than that, with POStPlan, it is saving substantial amounts from the reduction in work hours and the use of lower cost labor. It further stated that POStPlan was a reasonable initiative in light of declining mail transactions and the need to right-size its infrastructure to support the retail needs of the country. Finally, USPS said that it believes POStPlan was and remains a prudent business decision. Our report does not comment directly on the reasonableness of the POStPlan initiative or whether it was a prudent business decision, but we note in our report that USPS believed POStPlan was a proper operational decision for USPS and its stakeholders. Instead, our report focuses on USPS’s estimates of savings due to POStPlan. We do not disagree that POStPlan most likely resulted in some savings due to reduced work hours and have clarified our report to state such. However, as we mention in the report, USPS’s calculations of the actual savings achieved may be limited by errors in USPS’s salaries and benefits data, and thus, USPS may have understated or overstated the amount it has saved. We also revised the title of the report in response to USPS’s concern. 
Regarding our first recommendation that USPS establish guidance that clarifies when USPS should develop cost-savings estimates using a rigorous approach versus when it is sufficient to develop a rough estimate, USPS said that it performed the level of analysis necessary to support the decision to move forward with POStPlan and that there is not a concrete set of business rules that determine the level of analysis that should be conducted. Instead, USPS noted that its management intends to be guided by a variety of factors, on a case-by-case basis. These factors include: (1) the cost associated with the development of rigorous financial information, (2) whether savings are the sole factor motivating the decision, and (3) the amount of time that must be committed to performing detailed analysis, among other things. USPS added that decisions based on more complex operational changes and risk may require more detailed analysis. While we appreciate that there is value to considering the types of analyses to perform on a case-by-case basis, the factors that USPS lists in its written response are precisely the type of factors that could be included (or expanded upon) in guidance that clarifies how to make those case-by-case decisions. Additionally, as we note in our report, we believe such guidance will be helpful to USPS and its oversight bodies as it considers future initiatives. As such, we continue to believe our recommendation is appropriate. Regarding our second recommendation that USPS continue to assess and resolve errors in its salaries and benefits data and, as appropriate, update its calculations of actual savings achieved due to POStPlan, USPS said that it did not rely on this type of data in its original estimate of expected cost savings. We recognize that USPS did not rely on these data in that estimate. 
Instead, our report mentions that such data affected USPS’s post-arbitration decision estimate of expected savings and were used to calculate actual savings achieved thus far. Regarding the latter, USPS noted in its written response that due to system limitations, it cannot change past, existing data, but that it will continue to identify and rectify the causes of the data anomalies. USPS also noted that as more detailed information may be necessary in the future, it is reviewing possible future system or process improvement opportunities. These are positive steps to ensure that USPS is addressing these data issues and reviewing opportunities for future improvements. Regarding our third recommendation that USPS (1) verify that calculations of changes in revenue at POStPlan post offices in its revenue analyses are consistent with USPS’s definition of POStPlan post offices and (2) take steps to consider when it may be appropriate to develop an approach that more fully considers the effects of POStPlan on revenue across USPS, USPS did not directly address either part of this recommendation. Instead, USPS provided information on revenue at POStPlan post offices in 2011 and 2015 (such as the portion of total walk-in revenue these offices constituted), much of which is included in our report. USPS also reiterated that it expected revenue would shift from POStPlan post offices to the Level 18 and above offices that remotely manage the POStPlan offices, and noted that USPS’s revenue analysis supports that assumption. The intent of our recommendation was not to disagree with this assumption. Rather, the intent of our recommendation is to help ensure that USPS and its oversight bodies have quality information on the changes in revenue at POStPlan post offices in order to fully understand the effects of POStPlan.
Key to having such information is ensuring that the calculations of changes in revenue are consistent with USPS’s definition of what constitutes a “POStPlan post office.” As such, we continue to believe that verifying the accuracy of its calculations is important. Additionally, our report acknowledges the small portion of total walk-in revenue that POStPlan post offices constitute, and notes that a more complex analysis could be helpful if USPS expanded the initiative to additional offices, as may occur due to the workload re-evaluations that USPS plans to conduct. We therefore continue to believe that USPS should take steps to consider at what point such an analysis may be warranted. Finally, USPS disagreed with some of the specific examples we use in our report. In particular: USPS disagreed with an example showing that its original cost-savings estimate was incomplete due to the omission of costs associated with separation incentives offered to postmasters, noting that “annualized savings” estimates are generally not reduced by such start-up costs. We do not disagree that annualized savings are one way to measure cost savings. However, as we note in our report, OMB cost-estimating guidance states that agencies should also take into account the costs incurred to implement an activity, suggesting that it is the net cost savings that should be considered. As such, a fully complete cost-savings estimate would consider such start-up costs. Similarly, USPS disagreed with another example showing that the saved salary USPS authorized to postmasters contributed to the incompleteness of its original estimate, and noted that these salary payments were not planned at the inception of the program. We have updated our report to reflect that these payments were not planned.
Finally, USPS disagreed with statements showing that the change made to staffing in Level 18 post offices as a result of the POStPlan arbitration decision is tied to POStPlan, noting that this change was related to a separate grievance and that this separate grievance was specifically identified in a footnote in the POStPlan arbitration decision. We do not disagree with the idea that this change was a resolution of a separate grievance and that the footnote USPS refers to cites this separate grievance. However, we disagree that the change was not at all tied to POStPlan. The connection to POStPlan is clear in the arbitration decision’s wording. Specifically, in the arbitration decision, the arbitrator ruled that Level 4 RMPOs should be staffed by PSEs. When stating its ruling regarding the staffing change in Level 18 offices, the arbitration decision clearly states, “In view of the increased use of PSEs in Level 4 RMPOs …. I further order that all Level 18 post offices that are currently staffed by PSEs with the designation code 81-8 will now be staffed with a career employee.” Therefore, it is clear that changes in staffing at Level 4 RMPOs (which were part of POStPlan) also affected the resolution of this separate dispute. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, the Acting Chairman of PRC, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the actions the U.S.
Postal Service (USPS) took to implement the Post Office Structure Plan (POStPlan) before the September 2014 arbitration decision and the savings USPS estimated POStPlan would achieve, (2) the effect USPS determined the arbitration decision had on POStPlan staffing and cost savings, and (3) whether USPS’s POStPlan cost-savings estimates are reliable and any limitations of the estimates. To describe the POStPlan initiative, determine the actions USPS took to implement it before the September 2014 arbitration decision and identify the effects USPS determined the decision had on POStPlan staffing, we reviewed relevant laws, regulations, documentation and data, and conducted interviews. Specifically, we reviewed USPS guidance, policies, procedures, and other documents related to POStPlan planning and implementation, such as fact sheets, employee notification letters, and information submitted during the Postal Regulatory Commission’s (PRC) 2012 POStPlan proceeding. We reviewed USPS’s 2014 and 2015 annual reports to Congress and 2013 Five-Year Business Plan. We also reviewed documentation related to the arbitration in particular, such as the arbitration decision, subsequent memorandums of understanding between USPS and the American Postal Workers Union (APWU) that further implemented the decision, and the 2010-2015 collective bargaining agreement between USPS and APWU. We obtained written responses and data from USPS officials on the arbitration decision and POStPlan implementation from 2012 to 2015, such as data on the number of post offices where USPS reduced hours from 2012 to 2015 and postmasters affected by POStPlan. We assessed the reliability of these data by comparing them to other information obtained from USPS and asking USPS questions about data sources, quality, and timeliness. We found these data reliable for the purpose of describing the progress and status of POStPlan before and after the arbitration decision. 
We also reviewed prior GAO reports and documentation from USPS stakeholders, including PRC and USPS’s two postmaster associations—the National Association of Postmasters of the United States (NAPUS) and the National League of Postmasters of the United States (NLPM). For example, we reviewed PRC’s advisory opinion on POStPlan and the transcript of PRC’s POStPlan hearing, which it held on July 11, 2012. We selected NAPUS and NLPM due to their role as management associations that USPS must consult with and because they represent POStPlan-affected postmasters. We selected PRC due to its oversight role over USPS. We interviewed USPS officials and NAPUS, NLPM, and PRC officials to obtain additional information, views, and context on POStPlan. We also contacted APWU, but APWU officials did not accept our invitation for a meeting. To determine the cost savings USPS originally estimated it would achieve through POStPlan, the effect it estimated the arbitration decision had on savings, and the reliability and limitations of these estimates, we reviewed USPS’s POStPlan cost-savings estimates and compared the estimates to relevant criteria. Specifically, we reviewed USPS’s 2012 estimate of the savings it expected to achieve through POStPlan and its 2015 estimate of the arbitration decision’s impact on expected cost savings. We obtained USPS documentation and written responses related to POStPlan cost savings, interviewed USPS officials, and obtained documentation and interviewed officials from NAPUS, NLPM, and PRC to determine how USPS developed its estimates, the assumptions it used, the potential sources of uncertainty, the types of inputs included and omitted, and these stakeholders’ views.
We then assessed the reliability and soundness of these estimates using guidance on assessing the reliability of data (which are defined as including estimates—such as estimates of cost savings—and projections), cost estimating guidance, and internal controls standards adopted by USPS to determine the extent to which the estimates comported with these criteria. We reviewed these standards and guidance and then selected those practices that, in our professional judgment, were most applicable given that POStPlan is an efficiency and cost-savings initiative and given USPS’s financial condition. In particular, we assessed the estimates’ accuracy, validity, completeness, and consistency; any use of sensitivity analyses; and consideration of net cost-savings factors. We discuss the limitations of the estimates in this report. We also obtained USPS data on actual cost savings achieved from fiscal year 2012 to June 2015 (the most recent data available at the time of our review) due to POStPlan, and hourly pay rates in POStPlan post offices under the pre- and post-arbitration decision POStPlan staffing arrangements. We assessed the reliability of these data by comparing them to other information obtained from USPS and asking USPS officials questions about data sources, quality, and timeliness, and, for the actual savings data, reviewing how consistently USPS’s data files followed the methodology USPS officials described to us. Regarding the actual savings data, we found that USPS’s data files when USPS first began tracking savings did not always follow the methodology USPS described to us. While USPS officials did not provide explanations for these inconsistencies, USPS updated its methodology for tracking POStPlan cost savings beginning in fiscal year 2015. However, we also found errors in the salaries and benefits data USPS used to calculate actual savings achieved; we discuss the limitations in this report. 
Regarding the hourly pay-rate data, we found these data reliable for the purpose of describing hourly pay rates in POStPlan post offices according to USPS. It was beyond the scope of our review to assess whether POStPlan was a prudent business decision. Finally, to better understand the potential effects of POStPlan and the arbitration decision, we analyzed (1) salaries and benefits paid, and (2) the walk-in revenue earned at POStPlan post offices, by post office level, for periods before and after POStPlan implementation. We used data provided by USPS, as follows: Salaries and benefits data: USPS provided us data on the salaries and benefits it paid to POStPlan employees in POStPlan post offices in the third quarter of fiscal year 2011 (i.e., April, May, and June 2011). According to USPS officials, these data represented all salaries and benefits paid to all relevant employees during that period. USPS provided us the same information for the third quarter of fiscal year 2015. To make the fiscal year 2011 data comparable to the fiscal year 2015 data, we adjusted the fiscal year 2011 salaries and benefits using adjustment factors provided by USPS officials. Revenue data: USPS provided us data on the revenue in POStPlan post offices in fiscal years 2011 and 2015. We adjusted fiscal year 2011 dollars using the Gross Domestic Product deflator so that they would be stated in 2015 dollars. Office level classification data: USPS provided us data on what level each POStPlan post office is classified as of October 2015 (i.e., whether it is a Level 2, 4, or 6 remotely managed post office (RMPO) or part-time post office (PTPO)). Although USPS officials stated that the data provided included all POStPlan post offices, we found that they did not always include information for the same set of offices, and when providing these data, USPS officials did not provide explanations for why the number of POStPlan post offices differed.
As such, regarding our revenue analysis, we excluded offices as necessary in order to have as complete a set of information as possible for as many offices as possible with what was provided. Specifically, of those offices for which we had level information, we excluded those for which we did not have revenue data for both periods. In particular, USPS’s data did not include complete information on revenue in both periods at the majority of the about 400 Level 6 PTPOs. Thus, we excluded the Level 6 PTPOs from our results. We also excluded one Level 6 RMPO for this reason. Additionally, we excluded four offices that had multiple level classifications. Of those four, three were classified as both Levels 4 and 18, and one was classified as both Levels 6 and 18. Despite these exclusions, we found these data reliable for the purpose of describing changes in revenue at POStPlan post offices. Regarding our salaries and benefits analysis, in analyzing USPS’s salaries and benefits data, we found that these data were not reliable due to errors in how USPS recorded the hours its employees worked. In addition to the individual named above, key contributors to this report were Derrick Collins (Assistant Director), Amy Abramowitz, Lilia Chaidez, William Colwell, Marcia Fernandez, SaraAnn Moessbauer, Nalylee Padilla, Malika Rice, Michelle Weathers, and Crystal Wesco. | USPS continues to experience a financial crisis and has undertaken many initiatives to reduce costs. In May 2012, USPS announced POStPlan, which aimed to reduce retail hours at post offices and use less costly labor. However, an arbitrator ruled in September 2014 that USPS must reverse several of these staffing changes. GAO was asked to review the arbitration decision's effects on POStPlan staffing and cost savings. 
GAO examined: (1) USPS's actions to implement POStPlan before the decision and expected savings, (2) the decision's effects on POStPlan's staffing and savings, and (3) whether USPS's POStPlan cost-savings estimates are reliable. GAO reviewed relevant POStPlan documentation and data; compared USPS's POStPlan cost-savings estimating process to GAO's data reliability and cost-estimating guidance and internal control standards adopted by USPS; and interviewed officials from USPS, its regulatory body, and postmaster associations. The U.S. Postal Service (USPS) had largely completed Post Office Structure Plan (POStPlan) implementation prior to a 2014 POStPlan arbitration decision and expected millions in cost savings. Specifically, under POStPlan, USPS planned to reduce hours at about 13,000 post offices (from 8 to 2, 4, or 6 hours of retail service a day) and to staff them with employees less costly than postmasters. Prior to the arbitration decision, USPS had reduced hours at most of these offices and taken steps to make the staffing changes. For example, it replaced many career postmasters with non-career or part-time employees by offering separation incentives or reassignments. In July 2012, USPS estimated POStPlan would result in about $500 million in annual cost savings. USPS determined that, while the 2014 arbitration decision significantly affected planned staffing at POStPlan post offices and estimated savings, POStPlan was the correct operational decision for USPS and its stakeholders. The arbitrator ruled that many offices be staffed by bargaining-unit employees, such as clerks, rather than the generally less costly employees USPS had planned to use. As a result, USPS estimated in June 2015 that POStPlan would now result in annual savings of about $337 million, or 35 percent less than the approximately $500 million it expected. USPS's original and post-arbitration decision estimates of expected POStPlan cost savings have limitations that affect their reliability.
USPS officials noted that they do not have strict guidance on when a rough savings estimate is adequate versus when a more rigorous analysis is appropriate. Specific limitations include: imprecise and incomplete labor costs, including errors in underlying data; lack of a sensitivity review; and the exclusion of other factors that affect net cost savings, particularly the potential impact of reduced retail hours on revenue. For example, USPS's post-arbitration-decision estimate relies, in part, on its calculations of actual savings achieved due to POStPlan. While POStPlan most likely resulted in some savings, GAO found errors in the underlying salaries and benefits data used that may understate or overstate the amount of savings achieved. Additionally, while USPS later (i.e., after it developed its savings estimates) conducted analyses of changes in revenue, GAO found these analyses were limited because USPS's calculations of changes in revenue at POStPlan and non-POStPlan post offices were inconsistent with its definition of what constitutes a POStPlan office. As of March 2016, USPS was taking steps to understand the scope and origin of the errors in its salaries and benefits data, but its time frame for resolving the issue remains unclear, as does whether USPS subsequently intends to update its calculations of actual savings achieved. Internal control standards state that program managers and decision makers need quality data and information to determine whether they are meeting their goals. Without reliable data and quality methods for calculating the potential savings USPS expects to achieve through its initiatives, the actual savings they achieve, and the effects on revenue, USPS officials and oversight bodies may lack accurate and relevant information with which to make informed decisions regarding future cost-saving efforts in a time of constrained resources. 
To ensure that USPS has quality information regarding POStPlan, GAO recommends that USPS establish guidance that clarifies when to develop savings estimates using a rigorous approach; resolve errors in labor data and, as appropriate, recalculate actual savings achieved; and take steps to improve revenue analyses. USPS disagreed with some of GAO's findings but neither agreed nor disagreed with the recommendations. GAO continues to believe its recommendations are valid as discussed further in this report. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The private sector, driven by today’s globally competitive business environment, is faced with the challenge of improving its service while lowering costs. As a result, many companies have adopted innovative business practices to meet customer needs and retain profitability. Since DOD is facing a similar challenge of providing better service at a lower cost, it has begun to reexamine its business practices. With the end of the Cold War, the DOD logistics system must support a smaller, highly mobile, high technology force with fewer resources. Also, due to the pressures of budgetary limits and base closures, DOD must seek new and innovative ways to make logistics processes as efficient and effective as possible. To supply reparable parts for its approximately 4,900 aircraft, the Navy uses an extensive logistics system based on management concepts largely developed decades ago. The Navy’s system, commonly called a “pipeline,” consists of many activities that play a key role in providing aircraft parts to end-users when and where needed. This pipeline encompasses several functions, including the purchase, storage, distribution, and repair of parts. Another important function of this pipeline is to provide consumable parts (e.g., nuts, bearings, and fuses) that are used extensively to fix reparable parts and aircraft. The Defense Logistics Agency (DLA) provides most of the consumable parts that Navy repair activities need and handles a large part of the warehousing and distribution of reparable parts. Although not as large as the Navy, commercial airlines have similar operating characteristics to the Navy. They maintain fleets of aircraft that use reparable parts and operate logistics pipelines whose activities are similar. For both the Navy and commercial airlines, time plays a crucial role in the responsiveness of logistics operations and the amount of inventory needed. 
Pipeline complexity also adds to logistics costs by increasing overhead and adding to pipeline times. Condensing and simplifying pipeline operations, therefore, simultaneously improves responsiveness and decreases costs by reducing inventory requirements and eliminating infrastructure (warehouses, people, etc.) needed to manage unnecessary material. The Navy’s overall inventory management philosophy is one of maintaining large inventory levels at many different locations to ensure parts are readily available to meet customers’ needs. As of September 1995, the Navy had reparable inventory valued at $10.4 billion. However, a portion of this inventory is not needed to support daily operations and war reserves. Of the $10.4 billion inventory, the Navy classifies $1.9 billion (18 percent) as long supply—a term denoting that more stock is on hand than is needed to meet daily operations and war reserve requirements. The $10.4-billion and the $1.9-billion inventories were valued using DOD’s standard valuation methodology—reparables requiring repair were reduced by the estimated cost of repair and excess inventory was valued at salvage prices (2.5 percent of latest acquisition cost). Figure 1 details the Navy’s allocation of its inventory to daily operations, war reserves, and long supply. The inventory turnover rate is a measure of how efficiently a business uses its inventory investment and can be expressed as the ratio of the dollar value of repairs to the average inventory value. One commercial airline we visited calculated that, using this ratio, it would turn its reparable inventory over once every 5 months. In comparison, we calculate that, based on fiscal year 1995 repairs, the Navy’s wholesale-level inventory of reparable parts would turn over once every 2 years. The Navy incurs significant costs to manage this large inventory investment.
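The turnover comparison above can be sketched numerically. The following is a minimal illustration of the ratio described in this report (dollar value of repairs divided by average inventory value); the dollar figures in it are made up for the example and are not the airline's or the Navy's actual amounts.

```python
# Inventory turnover sketch: turns per year = annual repair value / average inventory value,
# so months per turn = 12 / turns per year.
# The dollar figures below are illustrative only, not actual Navy or airline data.

def months_per_turn(annual_repair_value: float, average_inventory_value: float) -> float:
    """Months needed to turn the inventory over once."""
    turns_per_year = annual_repair_value / average_inventory_value
    return 12 / turns_per_year

# An operator repairing $2.4 billion of parts a year against a $1 billion
# average inventory turns that inventory over once every 5 months.
print(months_per_turn(2.4e9, 1.0e9))   # 5.0

# One repairing only $0.5 billion a year against the same $1 billion
# inventory takes 24 months (2 years) per turn.
print(months_per_turn(0.5e9, 1.0e9))   # 24.0
```

The same ratio drives both results: the lower the repair throughput relative to the inventory held, the longer each dollar of inventory sits idle.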
At the wholesale level alone, the Navy estimates it spent almost $1.8 billion to repair, buy, and manage reparable parts during fiscal year 1995 (see table 1). This amount does not include the costs to store and maintain parts at operating locations, such as bases and aircraft carriers. Despite the billions of dollars invested in inventory, the Navy’s logistics system is still often unable to provide spare parts when and where needed. During fiscal year 1995, Navy aircraft were not mission capable 11.9 percent of the time because spare parts were not available to repair the aircraft (see fig. 2). One reason parts were not available was that the Navy’s system often does not provide timely deliveries of parts. The Navy reported that, between October 1994 and June 1995, parts were not immediately available to mechanics at operating locations 25 percent of the time for reparable parts and 43 percent for consumable parts. When a part is not available, an end-user requisitions the part from the wholesale supply system. According to the Navy’s data, the length of time from requisition to delivery of a part takes, on average, 16 days to operating bases and 32 days to aircraft carriers. If the Navy’s wholesale system does not have the item in stock (32 percent of the time for reparable parts), the Navy places the item on backorder. According to the Navy’s data, customers wait over 2.5 months, on average, to receive backordered items. The Navy reported that, as of June 1995, it had more than 31,000 backorders for reparable parts, worth about $831 million. The delay in receiving parts often forces mechanics to cannibalize parts (removing parts from one aircraft to make repairs on another). Between July 1994 and June 1995, the Navy reported that its mechanics at operating bases and on aircraft carriers cannibalized parts at least 70,500 times. 
This practice is inefficient because the mechanics have to remove a working part from one aircraft and then install the part on a different aircraft. According to Navy guidance, cannibalization is a symptom of a failure somewhere in the logistics system, but, in some instances, can be a viable management tool in keeping aircraft operational. Aircraft squadron officials at several locations we visited, however, told us that cannibalizing parts is a routine practice because the Navy’s system does not consistently provide replacement parts on a dependable basis. The Navy’s large inventory costs and slow customer service are the result of several factors, but the largest contributor is a slow and complex repair pipeline. According to Navy officials, about 75 percent of component repairs are relatively minor in nature and can be done by maintenance personnel at the operating bases. They also stated that, when a part requires more complex and extensive repair (about 25 percent of the time), the process can create as many as 16 time-consuming steps as parts move through the repair pipeline (see fig. 3). Component parts can accumulate at each step in the process, which increases the total number of parts that are needed to meet customer demands and to ensure a continuous flow of parts. Tracking parts through each of the 16 steps listed in figure 3, we estimate, using the Navy’s flow time data, that it can take about 4 months, on average, from the time a broken part is removed from an aircraft until the time it is ready for reissue. As figure 3 illustrates, a broken part can pass through a number of base- and wholesale-level steps. At the base level, after a mechanic removes a broken part from an aircraft, the item is routed through base maintenance. If the part cannot be repaired at the base, it is then sent to a wholesale storage location, where it sits until scheduled for repair.
Once scheduled, it is inducted into repair workshops and fixed, then sent to storage or used to fill a customer’s order. The Navy reported that over 190,000 parts were fixed through this process during fiscal year 1995 at a cost of about $957 million. While the repair pipeline time can take as long as 4 months, on average, it could be significantly longer because it does not include the time parts sit in wholesale storage awaiting repair. The Navy does not measure this step in the process; however, this time could be substantial. For example, the Navy does not promptly forward items to repair workshops after they break. Also, because the Navy schedules most repairs quarterly, many broken items could sit in storage for several months before being repaired. Parts may also sit in storage because many broken items in the Navy’s system are not needed to support daily operations or war reserves. Of the portions of the pipeline that are measured, the time spent receiving and repairing items at repair facilities accounts for the largest amount of pipeline time. Shown in figure 3 as “repair facility receiving” and “repair workshops,” these activities take an average of 73 days to complete. In examining the repair process at two repair facilities, we found that parts can be routed through several different workshops, thereby increasing the time to complete repairs. Functions such as testing, cleaning, machining, and final assembly are sometimes done at different locations at the repair facility. As a result, parts could be handled, packaged, and transported several times throughout the repair process. According to Navy officials, this is a common practice at the Navy’s repair facilities. At one repair facility, we examined 10 frequently repaired pneumatic and hydraulic components and found that about 85 percent of the repair time needed for these parts involved activities such as unpacking, handling, and routing the part to different workshops. 
The remaining 15 percent of the time was spent on the actual repair of the items. One item we examined had a repair time of 232 hours. However, only 20 hours was needed to actually repair the item; the remaining 212 hours involved time to handle and move the part to different locations. In addition to delays caused by routing parts to different locations, mechanics often do not have the necessary consumable parts (nuts, bolts, bearings, fuses, etc.) that are used in large quantities to repair parts. According to Navy officials, having the necessary consumable parts is another important factor affecting the timely repair of components. The Navy calculates that the lack of parts adds as much as 4 weeks to the average repair time. As of February 1996, the Navy had 11,753 reparable aircraft parts, valued at $486 million, in storage because parts were not available during the repair process to complete repairs. These items, which had been packaged and moved to a warehouse next to the repair facility, had been in storage for an average of 9 months. Figure 4 shows aircraft components awaiting parts in a warehouse at the Navy’s repair depot at Cherry Point, North Carolina. The Navy’s data indicates that DOD’s distribution and transportation system is slow in moving material among storage, repair, and end-user facilities and is another factor adding to the length of the repair pipeline. For example, with the current system, it takes an average of 16 days for a customer to receive a part at an operating base after a requisition is placed. As of June 1995, the Navy estimated that over one-half of this time involved DLA’s retrieval of the part from the warehouse and shipment of the part to the customer. In recognition of a changing global threat, increasing budgetary pressures, and the need for improvements to logistics system responsiveness, the Navy has recently undertaken three primary initiatives aimed at streamlining logistics operations. 
These initiatives are the regionalization of supply management and maintenance functions, privatization and outsourcing, and logistics response time reductions. The Navy is in the early stages of developing these initiatives and has not yet identified many of the specific business practices that it will use to achieve its goals. We have not reviewed the feasibility of these initiatives. However, we believe the initiatives provide a framework for improvements by focusing on the speed and complexity of the logistics pipeline. Under its regional supply initiative, the Navy is consolidating certain supply operations that are managed by a number of organizations under regionally managed supply centers. For example, naval bases, aviation repair depots, and shipyards each have supply organizations to manage their parts needs. These activities often use different information systems and business practices and their own personnel and facilities. Under the new process, one supply center in each of seven geographic regions will centrally manage the spare parts for these individual operations, with the objective of improving parts’ visibility and reducing the overhead expenses associated with separate management functions. The Navy also hopes this approach will lead to better sharing of inventory between locations, thus allowing it to reduce inventories. The Navy is not consolidating inventories into fewer storage locations; however, it is transferring data and management functions to the centers. Similarly, maintenance activities, such as base-level repair operations and depot-level repair operations, are managed by different organizations. As a result, maintenance capabilities, personnel, and facilities may be unnecessarily duplicated. Under the regional maintenance initiative, the Navy is identifying these redundant maintenance capabilities and consolidating these operations into regionally based repair facilities. 
For example, in one region, the Navy is consolidating 32 locations used to calibrate maintenance test equipment into 4 locations. The Navy believes that, by eliminating the fragmented management approach to supply management and maintenance, it can decrease infrastructure costs by reducing redundancies and eliminating excess capacity. The Navy also believes that by moving away from highly decentralized operations, it will be better positioned to improve and streamline operations Navy-wide. Both initiatives are in the early phases, however, so broad-based improvements have not yet occurred. The Navy also has an initiative to outsource and privatize functions. This initiative encompasses a broad spectrum of Navy activities, and possible outsourcing of functions within the reparable parts pipeline is only one aspect of this effort. Within the pipeline, the Navy has identified several material management functions, such as cataloging of items and overseas warehousing operations, as potential candidates for outsourcing. In January 1996, the Navy began developing cost analyses to determine whether contracting these functions out would be beneficial. Navy officials told us that they did not know when analyses on all candidates would be completed. One official said, however, that some candidates may be outsourced in 1997 at the earliest. The Navy expects other activities to be targeted for outsourcing in the future. According to Navy officials, those candidates will be identified as the Navy’s initiatives to streamline and improve operations progress. The objective of this initiative is to reduce the amount of time it takes a customer, such as a mechanic, to receive a part after placing an order. This initiative takes into account the series of processes that contribute to ensuring customers get the parts they need. These processes include placing and processing orders; storing, transporting, and distributing inventory; and repairing broken items. 
The Office of the Secretary of Defense (OSD) has established responsiveness goals that the Navy and other services are encouraged to meet. OSD wants to reduce the time it takes to fill a customer’s order from wholesale stock to 5 days by September 1996 and to 3 days by September 1998. OSD also wants to reduce the average backorder age to 30 days by October 2001. The Navy hopes to achieve these goals by looking at the pipeline as a whole and improving processes where needed. To identify and carry out improvements, the Navy has established a Logistics Response Time team, consisting of representatives from across the Navy and from DLA. Thus far, the team has focused primarily on collecting the data needed to accurately measure pipeline performance. In the spring of 1996, the team expects to begin identifying areas where process improvements should be applied to achieve the biggest gains in performance. This work will then be used to identify specific practices for carrying out these improvements. The airline industry has developed leading-edge practices that focus on reducing the time and complexity associated with logistics operations. We identified four best practices in the airline industry that have the potential for use in the Navy’s system. These practices have resulted in significant improvements and reduced logistics costs, especially for British Airways. These practices include the prompt repair of items, the reorganization of the repair process, the establishment of partnerships with key suppliers, and the use of third-party logistics services. When used together, they can help maximize a company’s inventory investment, decrease inventory levels, and provide a more flexible repair capability. In our opinion, they address many of the same problems the Navy faces and represent practices that could be applied to Navy operations. 
These practices appear particularly suited to Navy facilities that repair aircraft and components, such as repair depots and operating bases. Certain airlines begin repairing items as quickly as possible, which prevents the broken items from sitting idle for extended periods. Minimizing idle time helps reduce inventories because it lessens the need for extra “cushions” of inventory to cover operations while parts are out of service. In addition, repairing items promptly promotes flexible scheduling and production practices, enabling maintenance operations to respond more quickly as repair needs arise. Prompt repair involves inducting parts into maintenance shops soon after broken items arrive at repair facilities. Prompt repair does not mean that all parts are fixed, however. The goal is to quickly fix only those parts that are needed. One airline that uses this approach routes broken items directly to holding areas next to repair shops, rather than to stand-alone warehouses, so that mechanics can quickly access broken parts when it comes time for repair. These holding areas also give mechanics better visibility of any backlog. It is difficult to specifically quantify the benefits of repairing items promptly because it is often used with other practices to speed up pipeline processes. One airline official said, however, that his airline has kept inventory investment down partly because it does not allow broken parts to sit idle. In addition, the Air Force found through a series of demonstration projects that prompt repair, when used with other practices, could enable operations to be sustained with significantly fewer parts. For example, the Air Force reported in February 1995 that after the new practices were put in place at one location, 52 percent ($56.3 million) of the items involved in the test were potentially excess. The Air Force tested the new practices as part of its Lean Logistics program, which aims to improve Air Force logistics operations. 
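The repair-time and test figures cited in this section lend themselves to a quick sanity check. The sketch below is illustrative only, using nothing beyond the numbers quoted above (the 232-hour example item and the Air Force Lean Logistics test results); the variable names are our own.

```python
# Example item cited earlier: 232 hours of total turnaround time, of which
# only 20 hours was hands-on repair work.
total_hours = 232
repair_hours = 20
handling_hours = total_hours - repair_hours      # time spent moving/handling
handling_share = handling_hours / total_hours

print(f"Handling and movement: {handling_hours} h ({handling_share:.0%} of turnaround)")

# Air Force Lean Logistics test: 52 percent of the items, worth $56.3 million,
# were potentially excess. Backing out the implied total value of test items:
excess_value = 56.3e6
excess_share = 0.52
implied_total = excess_value / excess_share

print(f"Implied value of all test items: ${implied_total / 1e6:.0f} million")
```

The handling share of roughly nine-tenths for this item is consistent with the report's broader finding that about 85 percent of pipeline time is spent on processes other than actual repair.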
One approach to simplify the repair process is the “cellular” concept. This concept brings all the resources, such as tooling and support equipment, personnel, and inventory, that are needed to repair a broken part into one location, or one “cell.” This approach simplifies the flow of parts by eliminating the time-consuming exercise of routing parts to workshops in different locations. It also ensures that mechanics have the technical support they need so that operations run smoothly. In addition, because inventory is placed near workshops, mechanics have ready access to the parts they need to complete repairs more quickly. British Airways adopted the cellular approach after determining that parts could be repaired as much as 10 times faster using this concept. Another airline that adopted this approach in its engine-blade repair shop was able to reduce repair time by 50 to 60 percent and decrease work-in-process inventory by 60 percent. Figure 5 shows a repair cell used in British Airways’ maintenance center at Heathrow Airport. Several airlines and manufacturers have worked with suppliers to improve parts support while reducing overall inventory. Two approaches—the use of local distribution centers and integrated supplier programs—specifically seek to improve the management and distribution of consumable items. These approaches help ensure that the consumable parts for repair and manufacturing operations are readily available, which prevents items from stalling in the repair process and is crucial in speeding up repair time. In addition, by improving management and distribution methods, such as using streamlined ordering and fast deliveries, these approaches enable firms to delay the purchase of inventory until a point that is closer to the time it is needed. Firms, therefore, can reduce their stocks of “just-in-case” inventory.
Local distribution centers are supplier-operated facilities that are established near a customer’s operations and provide deliveries of parts within 24 hours. One airline that used this approach has worked with key suppliers to establish more than 30 centers near its major repair operations. These centers receive orders electronically and, in some cases, handle up to eight deliveries a day. Airline officials said that the ability to get parts quickly has contributed to repair time reductions. In addition, the officials said that the centers have helped the airline cut its on-hand supply of consumable items nearly in half. Integrated supplier programs involve shifting inventory management functions to suppliers. Under this arrangement, a supplier is responsible for monitoring parts usage and determining how much inventory is needed to maintain a sufficient supply. The supplier’s services are tailored to the customer’s requirements and can include placing a supplier representative in customer facilities to monitor supply bins at end-user locations, place orders, manage receipts, and restock bins. Other services can include 24-hour order-to-delivery times, quality inspection, parts kits, establishment of data interchange links and inventory bar coding, and vendor selection management. One manufacturer that used this approach received parts from its supplier within 24 hours of placing an order 98 percent of the time, which enabled it to reduce inventories for these items by $7.4 million—an 84-percent reduction. We have issued a series of reports on similar private sector practices that could be applied to DOD’s consumable inventories. These reports recommended new techniques that would minimize DOD’s role in storing and distributing consumable inventories. 
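As a rough check on the integrated-supplier figures just cited, one can back out the inventory levels they imply. This is a back-of-the-envelope sketch, not a figure from the report; it assumes the $7.4 million reduction corresponds exactly to the stated 84-percent cut.

```python
# Manufacturer example: a $7.4 million inventory reduction described as an
# 84-percent cut implies the following before/after inventory levels.
reduction = 7.4e6   # dollars cut from consumable inventory
pct_cut = 0.84      # stated share of the original inventory value

original = reduction / pct_cut     # implied inventory before the program
remaining = original - reduction   # implied inventory after the program

print(f"Implied original inventory: ${original / 1e6:.1f} million")
print(f"Implied remaining inventory: ${remaining / 1e6:.1f} million")
```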
Companies, such as PPG Industries and Bethlehem Steel, have reduced consumable inventories by as much as 80 percent and saved millions in associated costs by using “supplier parks” and other techniques that give established commercial distribution networks the responsibility to manage, store, and distribute inventory on a frequent and regular basis to end-users. The airlines we contacted provided examples of how third-party logistics providers can be used to reduce costs and improve performance. Third-party firms take on responsibility for managing and carrying out certain logistics functions, such as storage and distribution. Outsourcing these tasks enables companies to reduce overhead costs because it eliminates the need to maintain personnel, facilities, and other resources that are required to do these functions in-house. It also helps companies improve various aspects of their operations because third-party providers can offer expertise that companies often do not have the time or the resources to develop. For example, one airline contracts with a third-party logistics provider to handle deliveries and pickups from suppliers and repair vendors, which has improved the reliability and speed of deliveries and reduced overall administrative costs. The airline receives most items within 5 days, which includes time-consuming customs delays, and is able to deliver most items to repair vendors in 3 days. In the past, deliveries took as long as 3 weeks. Third-party providers can also assume other functions. One third-party firm that we visited, for example, can assume warehousing and shipping responsibilities and provide rapid transportation to speed parts to end-users. The company can also pick up any broken parts from a customer and deliver them to the source of repair within 48 hours. In addition, this company maintains the data associated with warehousing and in-transit activities, offering real-time visibility of assets. 
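The delivery-time improvement described above can be expressed as a simple before/after comparison. This is illustrative only; it assumes "3 weeks" means 21 days and compares that with the 5-day and 3-day figures cited.

```python
# Before third-party logistics: deliveries took as long as 3 weeks.
# After: most inbound items arrive in 5 days; items reach repair vendors in 3.
before_days = 21
after = {"inbound items": 5, "to repair vendors": 3}

for leg, days in after.items():
    speedup = 1 - days / before_days
    print(f"{leg}: {days} days vs {before_days} days ({speedup:.0%} faster)")
```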
The best practices that we observed in the airline industry can prove particularly beneficial when used in an integrated fashion. One airline, British Airways, used all of these practices as part of an overall reengineering effort, and it illustrates the benefits of using such an integrated approach. These efforts have helped transform British Airways from a financially troubled, state-owned airline into a successful private sector enterprise. British Airways today is considered among the most profitable airlines in the world and has posted profits every year since 1983. Table 2 shows several key logistics performance measures of British Airways and the Navy. In addition to implementing the four practices discussed earlier, British Airways took a number of other steps to successfully reengineer its logistics operations. One of the first steps was to undertake a fundamental shift in corporate philosophy, where British Airways placed top priority on customer service and cost containment. This philosophy directed all improvement efforts, and specific practices were assessed on how well they furthered these overall goals. Also, British Airways approached the process of change as a long-term effort that requires a steady vision and a focus on continual improvement. Although the airline has reaped significant gains to date, it continues to reexamine and improve its operations. Additional steps taken by British Airways to reengineer its operations include (1) reorienting the workforce toward the new philosophy; (2) providing managers and employees with adequate information systems to control, track, and assess operations; and (3) refurbishing existing facilities and constructing new ones to accommodate the new practices. 
As part of the Navy’s current efforts to improve the logistics system’s responsiveness and reduce its complexity, we recommend that the Secretary of Defense direct the Secretary of the Navy, working with DLA, to develop a demonstration project to determine the extent to which the Navy can apply best practices to its logistics operations. We recommend that the Secretary of the Navy identify several naval facilities to participate in the project and test specific practices highlighted in this report. The practices should be tested in an integrated manner, where feasible, to take advantage of the interrelationships among these practices. The specific practices that should be tested are inducting parts at repair depots soon after they break, consistent with repair requirements, to prevent parts from sitting idle; reorganizing repair workshops using the cellular concept to reduce the time it takes to repair parts; using integrated supplier programs to shift the management responsibilities for consumable inventories to suppliers; using local supplier distribution centers near repair facilities for quick shipments of parts to mechanics; and expanding the use of third-party logistics services to store and distribute spare parts between the depots and end-users to improve delivery times. We recommend that this demonstration project be used to quantify the costs and benefits of these practices and to serve as a means to identify and alleviate barriers or obstacles (such as strong internal resistance to change and unique operational requirements) that may inhibit the expansion of these practices. After these practices have been tested, the Navy should consider expanding and tailoring the use of these practices, where feasible, so they can be applied to other locations. In its comments on a draft of this report, DOD agreed with the findings and recommendations.
DOD stated that by September 30, 1996, the Deputy Under Secretary of Defense (Logistics) will issue a memorandum to the Secretary of the Navy and the Director of DLA, requesting that a demonstration project be initiated. According to DOD, this project should be started by the first quarter of fiscal year 1997. The Navy will conduct a business case analysis and assess the leading-edge practices highlighted in this report for their applicability in a Navy setting and, where appropriate, will tailor and adopt a version of these practices for use in its repair process. DOD also stated that it will ask the Navy to submit an in-process review not later than 6 months after the inception of the business case analysis. Finally, DOD agreed that after the practices have been tested, the Navy should consider expanding and tailoring the use of these practices so they can be applied to other locations. DOD’s comments are included in appendix I. We reviewed detailed documents and interviewed officials about the Navy’s inventory policies, practices, and efforts to improve its logistics operations. We contacted officials at the Office of the Chief of Naval Operations, Washington, D.C.; U.S. Naval Supply Systems Command, Arlington, Virginia; U.S. Naval Air Systems Command, Arlington, Virginia; U.S. Atlantic Fleet Command, Norfolk, Virginia; and the Naval Inventory Control Point, Philadelphia, Pennsylvania. Also at these locations, we discussed the potential applications of private sector logistics practices to the Navy’s operations. 
To examine Navy logistics operations and improvement efforts, we visited the following locations: Naval Aviation Depot, Cherry Point, North Carolina; Naval Aviation Depot, Jacksonville, Florida; Oceana Naval Air Station, Virginia Beach, Virginia; Jacksonville Naval Air Station, Jacksonville, Florida; Norfolk Naval Air Station, Norfolk, Virginia; Fleet and Industrial Supply Center, Norfolk, Virginia; Fleet and Industrial Supply Center, Jacksonville, Florida; Defense Distribution Depot, Cherry Point, North Carolina; Defense Distribution Depot, Jacksonville, Florida; and U.S.S. Enterprise. At these locations, we discussed with supply, maintenance, and aircraft squadron personnel the operations of the current logistics system, customer satisfaction, and the potential application of private sector logistics practices to their operations. Also, we reviewed and analyzed detailed information on inventory levels and usage; repair times; supply effectiveness and response times; and other related logistics performance measures. Except where noted, our data reflect inventory valued by the Navy at latest acquisition costs. We did not test or otherwise validate the Navy’s data. To identify leading commercial practices, we used information from our February 1996 report that compared Air Force logistics practices to those of commercial airlines. This information included an extensive literature search to identify leading inventory management concepts and detailed examinations and discussions of logistics practices used by British Airways, United Airlines, Southwest Airlines, American Airlines, Federal Express, Boeing, and Tri-Star Aerospace. We also participated in roundtables and symposiums with recognized leaders in the logistics field to obtain information on how companies are applying integrated approaches to their logistics operations and establishing supplier partnerships to eliminate unnecessary functions and reduce costs.
Finally, to gain a better understanding of how companies are making breakthroughs in logistics operations, we attended and participated in the Council of Logistics Management’s Annual Conference in San Diego, California. We did not independently verify the accuracy of logistics costs and performance measures provided by private sector organizations. We conducted our review from June 1995 to April 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Defense and the Navy; the Directors of DLA and the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report are listed in appendix II. Charles I. (Bud) Patton, Jr. Kenneth R. Knouse, Jr. Best Management Practices: Reengineering the Air Force’s Logistics System Can Yield Substantial Savings (GAO/NSIAD-96-5, Feb. 21, 1996). Inventory Management: DOD Can Build on Progress in Using Best Practices to Achieve Substantial Savings (GAO/NSIAD-95-142, Aug. 4, 1995). Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, June 29, 1994). Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994). Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, June 7, 1993). DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, June 4, 1993). DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991). Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, June 28, 1991).
| Pursuant to a congressional request, GAO examined the Navy's aircraft logistics system, focusing on the Navy's efforts to improve and reduce the cost of the system.
GAO found that: (1) the best practices identified in the airline industry could improve the responsiveness of the Navy's logistics system and save millions of dollars; (2) the Navy's logistics system is complex and often does not respond quickly to customer needs; (3) the factors contributing to this situation include the lack of spare parts, slow distribution, and inefficient repair practices; (4) some customers wait as long as four months for parts to become available; (5) the Navy is centralizing its supply management and repair activities, outsourcing certain management functions, and analyzing the effectiveness of its repair pipeline; (6) the best practices employed by the private sector show promise for the Navy because these firms hold minimal levels of inventory, keep spare parts readily accessible, and repair items quickly; (7) it takes an average of 11 days to repair a broken part in the private sector, as opposed to 37 days in the Navy's repair process; (8) the private-sector average is a result of repairing items immediately after they break and of using local distribution centers, integrated supplier programs, and third-party logistics providers; and (9) many of the airline industry's best practices are compatible with the Navy's logistics system. |
Since its creation in 1970, OMB has had two distinct but parallel roles. OMB serves as a principal staff office to the President by preparing the President’s budget, coordinating the President’s legislative agenda, and providing policy analysis and advice. The Congress has also assigned OMB specific responsibilities for ensuring the implementation of a number of statutory management policies and initiatives. Most importantly, it is the cornerstone agency for overseeing a framework of recently enacted financial, information resources, and performance management reforms designed to improve the effectiveness and responsiveness of federal departments and agencies. This framework includes the 1995 Paperwork Reduction Act and the 1996 Clinger-Cohen Act; the 1990 Chief Financial Officers Act, as expanded by the 1994 Government Management Reform Act; and the 1993 Government Performance and Results Act. OMB faces perennial challenges in carrying out these and other management responsibilities in an environment where its budgetary role necessarily remains a vital and demanding part of its mission. OMB’s resource management offices (RMOs) have integrated responsibilities for examining agency management, budget, and policy issues. The RMOs are supported by three statutory offices whose responsibilities include developing governmentwide management policies: the Office of Federal Financial Management, the Office of Federal Procurement Policy, and the Office of Information and Regulatory Affairs. In fiscal year 1996, OMB obligated $56 million and employed over 500 staff to carry out its budget and management responsibilities. 
The Results Act requires a strategic plan that includes six elements: (1) a comprehensive agency mission statement, (2) long-term goals and objectives for the major functions and operations of the agency, (3) approaches or strategies to achieve goals and objectives and the various resources needed to do so, (4) a discussion of the relationship between long-term goals/objectives and annual performance goals, (5) an identification of key external factors beyond agency control that could significantly affect achievement of strategic goals, and (6) a description of how program evaluations were used to establish or revise strategic goals and a schedule for future program evaluations. Although OMB’s July draft included elements addressing its mission, goals and objectives, strategies, and key external factors affecting its goals, we suggested that these elements could be enhanced to better reflect the purposes of the Results Act and to more explicitly discuss how OMB will achieve its governmentwide management responsibilities. Furthermore, the July draft plan did not contain a discussion of two elements required under the Results Act: (1) the relationship between the long-term and annual performance goals and (2) the use of program evaluation in developing goals. The structural and substantive changes OMB made to its July 1997 strategic plan constitute a significant improvement in key areas. In general, OMB’s revised plan provides a more structured and explicit presentation of its objectives, strategies, and the influence of external factors. Each objective contains a discussion of these common elements, facilitating an understanding of OMB’s goals and strategies. OMB’s September plan addresses the six required elements of the Results Act. At the same time, enhancements could make the plan more useful to OMB and the Congress in assessing OMB’s progress in meeting its goals. 
The September plan’s mission statement recognizes both OMB’s statutory responsibilities and its responsibilities to advise the President, and the goals and objectives are more results-oriented and comprehensive than in the July draft. For example, the plan contains a new, results-oriented objective—“maximize social benefits of regulation while minimizing the costs and burdens of regulation”—for its key statutory responsibility regarding federal regulation review. The breadth of OMB’s mission makes it especially important that OMB emphasize well-defined and results-oriented goals and objectives that address OMB’s roles in both serving the President and overseeing the implementation of statutory governmentwide management policies. OMB more clearly defines its strategies for reaching its objectives in the September plan, particularly with regard to some of its management objectives. For example, in the draft plan, OMB did not discuss the accomplishments needed to fulfill its statutory procurement responsibilities. In contrast, the September plan lays out OMB’s long-term goal to achieve a federal procurement system comparable to those of high performing commercial enterprises. It says that OMB will identify annual goals to gauge OMB’s success, and discusses the means and strategies (such as working with agencies to promote the use of commercial buying practices) it will use to accomplish this goal. OMB also commits to working with the Federal Acquisition Regulation Council to revise regulations and publish a best practices document. In the area of regulatory reform, OMB also commits to improving the quality of data and analyses used in regulatory decision-making and to developing a baseline measure of the net benefits for Federal regulations. OMB’s clear and specific description of its strategies for its procurement and regulatory review objectives could serve as models for developing strategies for its Results Act and crosscutting objectives. 
Although strategies to provide management leadership in certain areas are more specific, other strategies could benefit from a clearer discussion of time frames, priorities, and expected accomplishments. For example, to meet its objective of working within and across agencies to identify solutions to mission-critical problems, OMB states that it will work closely with agencies and a number of other organizations to resolve these issues. However, OMB does not describe specific problems it will seek to address in the coming years or its role and strategies for solving these issues. In defining its mission, goals and objectives, and strategies, OMB’s plan recognizes its central role in “managing the coordination and integration of policies for cross-cutting interagency programs.” The plan states that in each year’s budget, major crosscutting and agency-specific management problems will be presented along with approaches to solving them. The plan also provides a fuller discussion than was included in the July draft of the nature and extent of interagency groups that OMB actively works with in addressing a variety of functional management issues. Specific functional management areas, such as procurement, financial, and information management, are incorporated as long-term objectives. However, OMB’s plan could more specifically address how OMB intends to work with agencies to resolve long-standing management problems and high-risk issues with governmentwide implications. For example, in the information management area, OMB’s September plan refers to critical information technology issues, but it does not provide specific strategies for solving these issues. OMB discusses the ability of agencies’ computer systems to accommodate dates beyond 1999 (the Year 2000 problem) as a potential performance measure and states how it will monitor agencies’ progress. However, the plan does not describe any specific actions OMB will take to ensure this goal is met.
We have previously reported on actions OMB needs to take to implement sound technology investment in federal agencies. In a related area, OMB has elsewhere defined strategies and guidance for agency capital plans that are not explicitly discussed in the strategic plan. With respect to programmatic crosscutting issues, questions dealing with mission and program overlap are discussed only generically as components of broader objectives (such as working with agencies to identify solutions or to carry out the Results Act). The Congress and a large body of our work have identified the fragmented nature of many federal activities as the basis for a fundamental reexamination of federal programs and structures. Our recent report identified fragmentation and overlap in nearly a dozen federal missions and over 30 programs. Such unfocused efforts can waste scarce funds, confuse and frustrate program customers, and limit overall program effectiveness. The OMB plan states that the governmentwide performance plan, which OMB must prepare and submit as part of its responsibilities under the Results Act, will provide the “context for cross-cutting analyses and presentations,” but provides no additional specification. OMB’s strategic plan also does not explicitly discuss how goals and objectives will be communicated to staff and how staff will be held accountable. For example, OMB’s plan states that OMB staff are expected to provide leadership for and to be catalysts within interagency groups. Yet, the plan does not explain how OMB’s managers and staff will be made aware of and held accountable for this or other strategies for achieving OMB’s goals. As we noted in our review of the July draft plan, OMB’s staff and managers have a wide and expanded scope of responsibilities, and many of OMB’s goals depend on concerted actions with other agencies. 
In particular, tackling crosscutting issues will also require extensive collaboration between offices and functions within OMB, which the plan could discuss in more detail. In this environment, communicating results and priorities and assigning responsibility for achieving them are critical. The September plan more consistently discusses the relationship between annual and long-term goals as part of a discussion of each of its objectives. The plan provides useful descriptions of the performance measures OMB may use to assess its progress in its annual performance plan. For example, the plan suggests that “clean audit opinions” could measure how OMB is achieving its objective in the area of financial management. Such efforts are noteworthy because some of OMB’s activities, such as developing the President’s budget or coordinating the administration’s legislative program, present challenges for defining quantifiable performance measures and implementation schedules. Although the September plan provides a more consistent and thorough treatment of key external factors in achieving its goals, OMB could explain how it can mitigate the consequences of these factors. For example, OMB states that its goal of ensuring timely, accurate, and high-quality budget documents depends on the accuracy and timeliness of agency submissions of technical budget information. However, there is a role for OMB in assisting agencies to improve the accuracy and timeliness of data, particularly for such complex issues as estimating subsidy costs for loan and loan guarantee programs. OMB’s discussion of program evaluation could provide more information about how evaluations were used in developing its plan and how evaluations will be used to assess OMB’s and federal agencies’ capacity and progress in achieving the purposes of the Results Act. In preparing its strategic plan, OMB states that it reviewed and considered several studies of its operations prepared by OMB, GAO, and other parties. 
The plan also states that OMB will continue to prepare studies of its operational processes, organizational structures, and workforce utilization and effectiveness. However, OMB does not indicate clearly how prior studies were used, and OMB does not provide details on a schedule for its future studies, both of which are required by the Results Act. OMB officials have said it would be worthwhile to more fully discuss the nature and dimension of program evaluation in the context of the Results Act. As we noted in our review of the July draft plan, evaluations are especially critical for providing a source of information for the Congress and others to ensure the validity and reasonableness of OMB’s goals and strategies and to identify factors likely to affect the results of programs and initiatives. A clearer discussion of OMB’s responses to and plans for future evaluations could also provide insight into how the agency intends to address its major internal management challenges. For example, a critical question facing OMB is whether the approach it has adopted toward integrating management and budgeting, as well as its implementation of statutory management responsibilities, can be sustained over the long term. In view of OMB’s significant and numerous management responsibilities and the historic tension between the two concepts—of integrating or segregating management and budget responsibilities—we believe it is important that OMB understand how the reorganization has affected its capacity to provide sustained management leadership. In our 1995 review of OMB’s reorganization, we recommended that OMB review the impact of its reorganization as part of its planned broader assessment of its role in formulating and implementing management policies for the government. 
We suggested that the review focus on specific concerns that need to be addressed to promote more effective integration, including (1) the way OMB currently trains its program examiners and whether this is adequate given the additional management responsibilities assigned to these examiners and (2) the effectiveness of the different approaches taken by OMB in the statutory offices to coordinate with its resource management offices and provide program examiners with access to expertise. In commenting on our recommendation, OMB agreed that its strategic planning process offered opportunities to evaluate this initiative and could address issues raised by the reorganization. Although OMB’s plan states that it will increase the opportunities for all staff to enhance their skills and capabilities, it does not describe the kinds of knowledge, skills, and abilities needed to accomplish its mission nor a process to identify alternatives to best meet those needs. In summary, OMB has made significant improvements in its strategic plan. However, much remains to be done in improving federal management. We will be looking to OMB to more explicitly define its strategies to address important management issues and work with federal agencies and the Congress to resolve these issues. Mr. Chairman, this concludes our statement this morning. We would be pleased to respond to any questions you or other Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed how well the Office of Management and Budget’s (OMB) strategic plan addresses the Government Performance and Results Act’s requirements and some of the challenges remaining for OMB to address in future planning efforts. GAO noted that: (1) since its July 1997 draft, OMB has made changes to the plan based on its continuing planning efforts, congressional consultations, and comments from others; (2) overall, OMB’s September 1997 plan addresses all required elements of the Results Act and reflects several of the enhancements GAO suggested in its review of the July draft; (3) specific improvements include: (a) goals and objectives that show a clearer results-orientation; (b) more clearly defined strategies for achieving these goals and objectives; and (c) an increased recognition of some of the crosscutting issues OMB needs to address; (4) however, additional enhancements to several of the plan’s required elements and a fuller discussion of major management challenges confronting the federal government could help make the plan more useful to the Congress and OMB; (5) for example, the plan could provide a more explicit discussion of OMB’s strategies on such subjects as information technology, high-risk issues, overlap among federal missions and programs, and strengthening program evaluation; (6) OMB’s strategic plan indicates that the agency will use its annual performance plan, the governmentwide performance plan, other functional management plans, and the President’s Budget to provide additional information about how it plans to address some of these and other critical management issues; (7) GAO will continue to review OMB’s plans and proposals as additional detail concerning objectives, time frames, and priorities is established; and (8) GAO’s intention is to apply an integrated perspective in looking at these plans, consistent with the intent of the Results Act, to ensure that OMB achieves the results expected by its statutory authorities.
The private sector, driven by today’s globally competitive business environment, is faced with the challenge of maintaining and improving quality service at lower costs. As a result, many firms have radically changed, or reengineered, their ways of doing business to meet customer needs. Since the Department of Defense’s (DOD) environment is also changing, it needs to do the same. With the end of the Cold War, DOD’s logistics system must now support a smaller, highly mobile, high-technology force. Also, due to the pressures of budgetary limits, DOD must seek ways to make logistics processes as efficient as possible. To provide reparable parts for its aircraft, the Air Force uses an extensive logistics system that was based on management processes, procedures, and concepts largely developed decades ago. As of September 1994, the Air Force had invested $33 billion in reparable parts for its fleet of more than 6,800 aircraft. Reparable parts are items that can be fixed and used again, such as hydraulic pumps, navigational computers, landing gear, and wing sections. The Air Force’s logistics system, often referred to as a logistics pipeline, consists of a number of activities, including the purchase, storage, distribution, and repair of parts. The Air Force’s reparable parts pipeline primarily exists to ensure that aircraft stationed around the world at Air Force installations can get the parts they need to keep them operational. It also exists to support aircraft overhaul activities, when aircraft are periodically taken out of service for structural repairs and parts replacements. The Air Force Materiel Command (AFMC) is the organization that has primary responsibility for carrying out pipeline operations. Its tasks include determining how much inventory the Air Force needs to support its fleet, purchasing parts when necessary, and operating the facilities where major parts and aircraft repair are done. 
To carry out many of these tasks, AFMC has five air logistics centers (ALC) that are located in different regions throughout the United States. Each center is responsible for managing a portion of the reparable parts inventory, repairing certain parts, and overhauling specific types of aircraft. For fiscal year 1996, the Air Force estimates it will cost about $4.6 billion for maintenance of equipment and aircraft at the depot level. Other organizations also play a role in pipeline operations, including Air Force bases around the world, where Air Force aircraft are stationed. Although base maintenance personnel handle minor repairs, they send parts and aircraft to the ALCs for the heavier, more involved repairs. The bases, in turn, order replacement parts through the ALCs, where the bulk of Air Force inventory is stored. Another of these organizations is the Defense Logistics Agency (DLA), which handles the warehousing and distribution operations at each of the five ALCs. In general, new and repaired parts are stored at each center in DLA warehouses until they are needed. When an order is placed for a part, DLA retrieves the part from warehouse shelves and ships it accordingly. DLA also receives the broken items being shipped from the bases and stores them until the ALC repair shops are ready to fix them. Figure 1.1 shows how the Air Force’s inventory was distributed among Air Force bases and the ALCs (including DLA warehouses) as of September 1994. It also shows the amount of inventory in transit between the various locations. DLA plays another important role in pipeline operations; it provides expendable parts needed by the various Air Force repair activities. Expendable parts—also known as consumables—include items such as nuts, bolts, and rivets that are used extensively to fix reparable parts and aircraft. If these items are not readily available, repair operations can stall and lead to large quantities of unrepaired inventory. 
We have issued a series of reports on private sector practices that could be applied to DOD’s expendable inventories. Each report recommended new techniques that would minimize DLA’s role in storing and distributing expendable inventory. Although not as large as the Air Force, commercial airlines’ operations resemble the Air Force’s in several ways. First, airlines operate out of a number of different airports, and they must provide the aircraft at these locations with the parts they need. Second, airlines must periodically overhaul their aircraft and ensure that repair activities get the necessary parts. Third, the reparable parts pipeline that exists to fulfill these needs involves the purchase, storage, distribution, and repair of parts. In addition, for both the Air Force and commercial airlines, time plays a crucial role in the reparable parts pipeline. The amount of time involved in the various pipeline activities directly affects the responsiveness of logistics operations. For example, the longer it takes to deliver parts to a mechanic, the longer it will be before the aircraft can be repaired and ready for takeoff. Time also has a significant impact on cost. For example, the longer it takes to repair a part, the more inventory an organization must carry to ensure coverage while that part is out of service. Condensing pipeline times, therefore, simultaneously improves responsiveness and drives down costs. Complexity also plays an important role; it adds to costly overhead and pipeline time. For example, if an organization holds multiple layers of inventory at different locations, it must provide the space, equipment, and personnel to accommodate this inventory at each location, all of which contribute to overhead costs. Moreover, if a part must filter through each of these levels before finally reaching the end user, such as a mechanic, each stop along the way adds to pipeline time. 
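The relationship between pipeline time and inventory cost described above follows Little’s Law: the stock tied up in a pipeline is roughly the demand rate multiplied by the pipeline time. The sketch below is a minimal illustration using invented figures, not actual Air Force or airline data:

```python
# Illustrative only: Little's Law says average pipeline inventory equals
# demand rate times pipeline (flow) time. All figures here are hypothetical.

def pipeline_inventory(demand_per_day, pipeline_days):
    """Average number of parts tied up in the pipeline."""
    return demand_per_day * pipeline_days

# A part demanded 2 times per day with a 63-day pipeline...
before = pipeline_inventory(2, 63)   # 126 parts in process
# ...versus the same demand with the pipeline condensed to 10 days.
after = pipeline_inventory(2, 10)    # 20 parts in process

print(before, after)  # prints: 126 20
```

Under these assumed figures, condensing the pipeline from 63 to 10 days frees 106 parts per item without any change in demand, which is why shorter pipeline times simultaneously improve responsiveness and reduce the inventory investment.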
As part of our continuing effort to help improve DOD’s inventory management practices, the Ranking Minority Member, Subcommittee on Oversight of Government Management and the District of Columbia, Senate Committee on Governmental Affairs, requested that we compare the Air Force’s management of its $33 billion reparable parts inventory with the operations of leading-edge private sector firms. This report focuses on (1) best management practices used in the commercial airline industry to streamline logistics operations and improve customer service, (2) Air Force reengineering efforts to improve the responsiveness of its logistics system and reduce costs, and (3) barriers that may stop the Air Force from achieving the full benefits of its reengineering efforts. To obtain DOD’s overall perspective on the Air Force’s logistics system and the potential application of private sector practices to its operations, we interviewed officials at the Office of the Under Secretary of Defense for Logistics and Air Force Headquarters, Washington, D.C., and DLA Headquarters, Alexandria, Virginia. We also discussed specific Air Force logistics policies and operations and reviewed inventory records at AFMC, Dayton, Ohio. To examine Air Force repair facilities, other logistics operations, and the new logistics practices being tested in the Air Force, we visited the Sacramento ALC, McClellan AFB, California; San Antonio ALC, Kelly AFB, Texas; Oklahoma City ALC, Tinker AFB, Oklahoma; and Dyess AFB, Texas. At these locations, we discussed maintenance and repair activities and processes, inventory management practices, “Lean Logistics” and reengineering program initiatives, and the potential application of additional private sector practices. We also contacted officials at the Warner Robins and Ogden ALCs to discuss and document the new business practices being tested and planned at those locations. 
Except where noted, our analysis reflects inventory valued at the last acquisition cost, as of September 1994. As highlighted in this report, the accuracy of Air Force inventory information is questionable. We did not test or otherwise validate the Air Force inventory data. During this review, we selected and physically examined a sample of items from the Air Force inventory that we believe highlighted the effect of the current and past DOD inventory management practices. This judgmental sample was drawn from E-3 and C-135 unique parts. Because we selected these items based on high dollar value, high levels of inventory on hand, and/or low demand rates, the results of our sample analysis cannot be projected to the total Air Force inventory. To identify best management practices being used by the private sector, we reviewed over 200 articles from various management and distribution publications, identified companies that were highlighted as developing innovative management practices, and visited the following organizations in the airline industry: American Airlines Maintenance Center, Tulsa, Oklahoma; British Airways Engineering, Heathrow Airport, United Kingdom; British Airways Avionics Engineering, Llantrissant, South Wales, United Kingdom; British Airways Maintenance Cardiff, South Wales, United Kingdom; United Airlines, San Francisco, California; United Airlines Maintenance Center, Indianapolis, Indiana; Boeing Commercial Airplane Group, Seattle, Washington; Federal Express, Memphis, Tennessee; and Tri-Star Aerospace Corporation, Deerfield Beach, Florida. At each company, we discussed and examined documentation related to the company’s reengineering efforts associated with management, employees, information technology, maintenance and repair processes, and facilities.
We also contacted Southwest Airlines to obtain information on its maintenance and material management operations and visited the Northrop-Grumman Corporation aircraft production facility in Stuart, Florida, to examine its integrated supplier operations. To obtain additional information on supplier partnerships and implementation strategies, we participated in an International Quality and Productivity Center symposium on supplier partnerships in Nashville, Tennessee. Representatives from John Deere Waterloo Works; Bethlehem Steel; Federal Express; BP Exploration (Alaska), Inc.; E.I. DuPont; Salem Tools; Volvo GM-Heavy Trucks; Berry Bearing Company; The Torrington Company; Procard, Inc.; Lone Star Gas Company; Coors Brewing Company; Texas Instruments, Inc.; Allied Signal; Oryx Energy Company; Timken; Sun Microsystem, Inc.; Dixie Industrial Supply; Darter, Inc.; Mighty Mill Supply, Inc.; Alloy Sling Chain Industries; Columbia Pipe and Supply Company; Strong Tool Company, Inc.; Id One, Inc.; and Magid Glove and Safety Manufacturing Company, discussed their supplier partnership concepts, implementation strategies, and results. To gain a better understanding of how companies are applying integrated approaches to their logistics operations, we attended an integrated supply chain round table, hosted by Procter and Gamble. Attending this round table were representatives from Chrysler Corporation, Digital Equipment Corporation, E.I. Dupont Corporation, Levi Strauss, Massachusetts Institute of Technology, Siemens Corporation, 3M Corporation, and Xerox Corporation. To determine the ongoing problems of the current Air Force logistics system, we reviewed related reports issued since 1990 by us, the Air Force Audit Agency, and Air Force Logistics Management Agency. We conducted our review from August 1993 to August 1995 in accordance with generally accepted government auditing standards. 
Commercial airlines have cut costs and improved customer service by streamlining their logistics operations. The most successful improvements include using highly accurate information systems to track and control inventory, employing various methods to speed the flow of parts through the pipeline, shifting certain inventory management tasks to suppliers, and letting third parties handle parts repair and other functions. One of the airlines we studied, British Airways, has substantially reengineered its logistics operations over the last 14 years. These improvements have helped transform British Airways from a financially troubled, state-owned airline into a successful private sector enterprise. British Airways today is considered among the most profitable airlines in the world and has posted profits every year since 1983. British Airways has approached the process of change as a long-term effort that requires a steady vision and a focus on continual improvement. Although the airline has reaped significant gains from improvements to date, it continues to reexamine operations and is making continuous improvements to its logistics system. British Airways has used an integrated approach to reengineer its logistics system. It laid out a clear corporate strategy, determined how logistics operations fit within that strategy, and tied organizationwide improvements directly to those overarching goals. With this approach, the various activities encompassed by the logistics pipeline were viewed as a series of interrelated processes rather than isolated functional areas. For example, when British Airways began changing the way parts were purchased from suppliers, it considered how those changes would affect mechanics in repair workshops. British Airways takes a significantly shorter time than the Air Force to move parts through the logistics pipeline. 
Figure 2.1 compares British Airways’ condensed pipeline times with the Air Force’s current process by showing how long it takes a landing gear component to move through each organization’s system. British Airways officials described how an integrated approach could lead to a continuous cycle of improvement. For example, culture changes, improved data accuracy, and more efficient processes all lead to a reduction in inventories and complexity of operations. These reductions, in turn, improve an organization’s ability to maintain accurate data, and they stimulate continued change in culture and processes, both of which fuel further reductions in inventory and complexity. Despite this integrated approach, British Airways’ transformation did not follow a precise plan or occur in a rigid sequence of events. Rather, according to one manager, airline officials took the position that doing nothing was the worst option. After setting overall goals, airline officials gave managers and employees the flexibility to continually test new ideas to meet those goals. The five general areas in which British Airways has reengineered its practices are corporate focus and culture, information technologies, material management, repair processes, and facilities. These efforts are summarized in table 2.1 and are discussed briefly after the table and in more detail in appendix I. British Airways officials said changing the corporate mind-set was the single most important aspect of change, as well as the most difficult. Before reforms got underway in 1981, British Airways was an inefficient, over-staffed government organization on the brink of bankruptcy. By 1987, when privatization occurred, British Airways had substantially changed the culture that gave rise to these problems. 
Converting this culture has entailed appointing new top management from private industry to bring a better business focus to the organization and serve as champions of change; undertaking an initial round of drastic cost cuts, which included a 35-percent reduction in the workforce to eliminate redundant and unnecessary positions; adopting a new corporate focus and strategy in which improving customer service became the driving force behind all improvements; setting new performance measures that reflected customer service goals and corporate financial targets; instituting ongoing training and education programs to familiarize managers and employees with the new corporate philosophy; adopting total quality management principles to promote continual improvement; replacing managers who were unwilling or unable to adapt to the new culture; and negotiating agreements with employee unions to allow for a more flexible workforce. British Airways officials said the airline could not have successfully reengineered its practices without having the right technological tools to plan, control, and measure operations. As a result, the airline developed three key systems, the most important of which was an inventory tracking system that provides real-time, highly accurate visibility of parts and processes. The three systems have enabled managers and workers to know what parts are on hand, where they are, what condition they are in, when they will be needed, and how well operations are meeting corporate goals. The airline did not delay initiatives to streamline specific processes until changes in corporate culture and upgrades in data systems had been made; it began reexamining its processes concurrently. Two of the areas targeted were the way parts flow in from suppliers as well as how they are stored and distributed internally.
Initiatives to streamline these areas have included shifting from in-house personnel to a third-party logistics company the task of arranging, tracking, and ensuring delivery of parts from its primarily North American suppliers and to third-party repair vendors; reducing the number of suppliers from 6,000 to 1,800 and working toward more cooperative relationships with the remaining suppliers; working with key expendable parts suppliers to establish more than 30 local distribution centers near British Airways’ main repair depot, such as the one shown in figure 2.2, to provide 24-hour delivery of such parts; establishing an integrated supplier program in which a key expendable parts vendor has taken on responsibility for monitoring parts usage and determining when to replenish inventory levels; consolidating internal stocking points into strategic locations to reduce inventory layers and improve responsiveness to end users; and installing automated storage, retrieval, and delivery systems to help ensure quick delivery of parts to end users. British Airways also targeted its component repair and aircraft overhaul operations for change because it wanted to speed up the repair process. It has converted a number of workshops to a “cellular” arrangement, which involves bringing the resources needed to repair an item or range of items into one location, or “cell” (see fig. 2.3). These resources include not only the mechanics and the equipment directly involved in the repairs, but also support personnel and inventory. In the past, all of these resources may have been scattered among several different sites. The cellular approach has reduced repair times by simplifying the flow of parts through repair workshops and ensuring that mechanics have the support they need to complete work quickly. While reengineering its processes, British Airways decided to renovate existing structures or build entirely new facilities to accommodate the new practices. 
Converting to cellular operations, for example, required moving widely scattered workshops under one roof and providing additional space for inventory and support staff. The renovations occurred primarily at British Airways’ main repair depot at London’s Heathrow Airport. Two new facilities were constructed in South Wales to house avionics component repair and Boeing 747 aircraft overhaul activities. British Airways was able to implement the most aggressive changes through the new facilities, called “green field sites” (see fig. 2.4). British Airways, which undertook this new construction after determining that it needed additional capacity, used the new facilities as an opportunity to start with a clean slate. It was able to fully implement state-of-the-art practices in workforce management philosophies, information systems, material management, and repair processes without being hindered by preexisting conditions. For example, one of the most valuable aspects of the green field sites has been British Airways’ ability to establish an entirely new corporate culture. Most employees are new hires, and all had to pass through a rigorous screening process to ensure that they possessed the skills and personal characteristics conducive to the flexible, team-oriented environment envisioned. British Airways’ initiatives have helped improve the responsiveness of logistics operations and reduced associated costs. Table 2.2 shows key performance measures that illustrate the result of British Airways’ efforts. Other airlines have pursued improvements similar to the steps taken by British Airways and have likewise seen dramatic results. For example, United Airlines adopted cellular repair in its engine blade overhaul workshop. As a result, United Airlines has reduced repair time by 50 to 60 percent and decreased work-in-process inventory by 60 percent. Table 2.3 highlights examples of some of the approaches other companies have used. 
Southwest Airlines differs from other airlines; it contracts out almost all component repair and aircraft overhaul. In selecting repair vendors, Southwest emphasizes the quality of repairs because fewer breakdowns enable it to carry less inventory and keep repair costs down. Southwest also emphasizes the speed of repairs. It stipulates specific repair turnaround times, and it applies penalties whenever these times are exceeded. Manufacturers, suppliers, and third-party logistics providers are also playing a role in streamlining operations and improving the effectiveness of logistics activities. In many cases, these vendors enter partnership-type arrangements with customers that involve longer term relationships and more open sharing of information. The following are examples of vendors that are helping companies better meet logistics needs. Boeing, one of the world’s leading aircraft manufacturers, has adopted a policy in which it promises next-day shipment for all standard part orders unless the customer specifies otherwise. Through its main distribution center in Seattle, Washington, and a network of smaller distribution centers worldwide, Boeing is providing quick order-to-delivery times and making it possible for customers to move from just-in-case toward just-in-time stocking policies. Tri-Star, a distributor of aerospace hardware and fittings, offers an integrated supplier program in which it works closely with customers to manage expendable parts inventories. Its services, which can be tailored to customer requirements, include placing a Tri-Star representative in customer facilities to monitor inventory bins at end-user locations, place orders, manage receipts, and restock bins. Tri-Star also maintains data on usage, determines what to order and when, and provides replenishment on a just-in-time basis. 
The integrated supplier programs entail other services as well, such as 24-hour order-to-delivery times, quality inspection, parts kits, establishment of electronic data interchange links and inventory bar coding, and vendor selection management. Tri-Star operates integrated supplier programs with nine aerospace companies, including British Airways, the first airline to enter such an arrangement with Tri-Star, and United Airlines, a recent addition. Table 2.4 shows the types of services, reductions, and improvements achieved by Tri-Star for some of its customers (designated as A through E) under the integrated supplier program. FedEx Logistics Services (FLS), a division of express delivery pioneer Federal Express, enables companies to shed certain logistics functions while boosting their capabilities to respond to operational or customer needs. Among its services is PartsBank, in which FLS stores a company’s spare parts at FLS warehouses; takes orders; and retrieves, packs, and ships needed parts. Once a replacement part is received, the customer can place the broken item in the package, and Federal Express will pick up the item and deliver it to the source of repair within 48 hours. FLS provides coverage 24 hours a day, 365 days a year. It also maintains the data associated with these activities and can provide real-time visibility of assets in the warehouse or in transit. In addition to PartsBank, FLS will develop customized services, which involves examining a client’s distribution practices and finding ways to eliminate wasteful steps. In recognition of increasing budgetary pressures, the changing global threat, and the need for radical improvements to its logistics system, the Air Force has begun a reengineering program aimed at redesigning its logistics operations. This program, called Lean Logistics, is testing many of the same leading-edge concepts found in the private sector that have worked successfully in reducing cost and improving service.
The Air Force, however, could expand and improve Lean Logistics, where feasible, by including closer “partnerships” with suppliers and third-party logistics services, testing the cellular concept in the repair process, and modifying its facilities. Incorporating some of these practices will require the collaboration of DLA and other DOD components. Also, to adopt these concepts Air Force-wide, the Air Force must improve its information system capabilities. Certain issues must be resolved before the Air Force achieves a fully reengineered logistics system that substantially reduces cost and improves service. For example, (1) the basic DOD culture must become receptive to radical new concepts of operations, (2) the traditional role of DLA as a supplier of expendable parts and as a storage and distribution service will be significantly altered, and (3) improvements to outdated and unreliable inventory data systems require management actions and funding decisions that must be made outside the responsibility of both Lean Logistics managers and the entire Air Force. The current Air Force logistics system is slow and cumbersome. Under the current process, the Air Force can spend several months or even years contracting for an item or its piece parts and having it delivered, or it may take several months to repair the parts and then distribute them to the end user. The complexity of the repair and distribution process creates many different stopping points and layers of inventory as parts move through the system. Parts can accumulate at each step in the process, which increases the total number of parts in the pipeline. The Air Force has developed both a three-level and a two-level maintenance concept to repair component parts. Under the three-level concept (organizational, intermediate, and depot), a broken part must pass through a number of base-level and depot-level steps in the pipeline (see fig. 3.1).
After a broken part is removed from the aircraft by a mechanic, it is routed through the base repair process. If the part cannot be repaired at the base, it is sent to an ALC and enters the depot repair system. After it is repaired, the part is either sent back to the base or returned to the DLA warehouse, where it is stored as serviceable inventory. When DLA receives a request for a part, it ships the part to the base, where it is stored until needed for installation on an aircraft. Currently, the Air Force estimates that this repair cycle takes an average of 63 days to complete. This estimate, however, is largely based on engineering estimates that do not provide an accurate measure of repair cycle time. The actual repair time may be significantly longer because the Air Force does not include in its estimate the time a part sits in the repair shop or in storage awaiting repair. Under the two-level maintenance concept (organizational and depot), items that were previously repaired at the intermediate base maintenance level will be repaired at the depot level, thus significantly reducing the logistics pipeline, inventory levels, and maintenance personnel and equipment at the base level. In part because of the length of its pipeline, the Air Force has invested $33 billion in reparable aircraft parts and $3.7 billion in expendable parts, totaling $36.7 billion as of September 1994. The Air Force estimates that $20.4 billion of its total inventory is needed to support daily operations and war reserves. The Air Force allocates the remaining 44 percent to other types of reserves to ensure that it will not run out of parts if they are needed.
The reserve inventory, valued at $16.3 billion, consists of the following categories: $1.7 billion for safety stocks, which are stocks purchased to ensure the Air Force will not run out of routinely needed parts; $2.8 billion for numeric stockage objective items, which are parts that are not routinely needed but are considered critical to keep an aircraft in operational status, so they are purchased and stored just in case an item fails; and $11.8 billion for items considered in “long supply,” which is a term denoting that more stock is on hand than what is needed to meet current demands, safety, and numeric stockage objective levels, but this stock is not currently being considered for disposal. Figure 3.2 details the Air Force’s allocation of its inventory to daily operations, war reserves, and other categories of stock. Air Force officials have said the Air Force can no longer continue its current logistics practices if it is to effectively carry out its mission in today’s environment. Budgetary constraints in recent years have led to substantial reductions in personnel, leaving the remaining workforce to deal with a logistics operation that has traditionally relied on large numbers of personnel to make it work. At AFMC, the organization primarily responsible for supporting the Air Force fleet, the workforce was reduced by 18.5 percent between 1990 and 1994. Moreover, in June 1995, the Defense Base Realignment and Closure Commission recommended that two of AFMC’s five ALCs be closed. As these ALCs are eventually closed, AFMC will have to find ways to accommodate their workload with the resources that remain. In addition, the end of the Cold War has led to an evolution of the military services’ roles and missions. DOD’s emphasis today is on sustaining a military force that can respond quickly to regional conflicts, humanitarian efforts, and other nontraditional missions. 
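The inventory totals and reserve categories cited above are internally consistent, and the arithmetic can be cross-checked directly. The sketch below (in Python, with all values in billions of dollars) is purely illustrative; the variable names are ours, not the Air Force's.

```python
# Cross-check of the Air Force inventory figures cited in the text
# (values in billions of dollars, as of September 1994).
reparable = 33.0        # reparable aircraft parts
expendable = 3.7        # expendable parts
total = reparable + expendable          # $36.7 billion total inventory

operations_and_war = 20.4               # daily operations and war reserves
reserves = total - operations_and_war   # $16.3 billion in other reserves

# The reserve inventory breaks down into three categories.
safety_stocks = 1.7      # stocks held against routine demand
numeric_stockage = 2.8   # just-in-case stocks of mission-critical items
long_supply = 11.8       # stock beyond current needs, not yet for disposal

assert abs(total - 36.7) < 1e-9
assert abs(reserves - 16.3) < 1e-9
assert abs(safety_stocks + numeric_stockage + long_supply - reserves) < 1e-9

# The reserves are the "remaining 44 percent" the text refers to.
print(f"{reserves / total:.0%}")  # → 44%
```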
These changing roles and missions, combined with ongoing fiscal constraints, have resulted in DOD’s call for a smaller, highly mobile, high-technology force and a leaner, more responsive logistics system. To address logistics needs, in 1994 DOD issued a strategic plan for logistics that sets forth a series of improvements. This plan, which reflects many of the philosophies found in the private sector, outlines improvements in three areas. First, it calls for reducing logistics response times—the time necessary to move personnel, inventory, and other assets—to better meet customer needs. Second, it calls for a more “seamless” logistics system. The different activities comprising logistics operations are to be viewed and managed as a series of interdependent activities rather than isolated functional areas. Third, the plan seeks a streamlined infrastructure to help reduce overhead costs associated with facilities, personnel, and inventory. The Air Force has described its initiatives to improve its logistics system as the cornerstone of all future improvements. These efforts, spearheaded by AFMC, are aimed at dramatically improving service to the end user while simultaneously reducing pipeline time, excess inventory, and other logistics costs. The initiatives, called Lean Logistics, are still in the early stages and therefore still evolving. Nonetheless, AFMC began testing certain practices through small-scale demonstration projects in October 1994, with promising results to date. In addition, AFMC plans to begin testing additional, broader-based process improvements in fiscal year 1996.
The demonstration projects underway as of March 1995 involved less than 1 percent of Air Force inventory items and tested the following primary concepts: (1) consolidated serviceable inventories, in which minimum levels of required inventory were stored in centralized distribution points in ALCs; (2) rapid transportation of parts between bases and ALCs; (3) repair of broken parts at ALCs as they arrive from bases or as centralized inventory levels drop; and (4) improved tracking of parts through the repair process. Each ALC tested some combination of these concepts and was identifying the information system improvements needed to adopt these practices on a wider scale. The tests scheduled to begin in fiscal year 1996 are aimed at broadening these efforts. Teams involving personnel from AFMC headquarters and each ALC have been redesigning five underlying business processes to overhaul the way parts are bought, distributed, and repaired. The teams are now determining how the redesigned processes must fit together so that reforms can be carried out in an integrated manner. Table 3.1 shows the business areas being addressed and briefly describes how each process will be changed. The test projects currently underway have demonstrated that the Air Force could sustain operations with significantly fewer parts. For example, at the Sacramento ALC, where all four concepts are being tested, 62 percent ($52.3 million) of the items involved in the project were identified as potential excess. Similarly, at the Warner Robins ALC, 52 percent ($56.3 million) of the items in its test program were identified as potential excess. AFMC has recently developed a preliminary plan for implementing its Lean Logistics concepts commandwide. Although these concepts could substantially improve operations, Air Force efforts to date are not as extensive as they could be. 
A number of leading-edge practices that have worked successfully in the private sector in reducing cost and improving service are not currently incorporated into the Lean Logistics program. These include the following: Use of third parties: The current Lean Logistics program does not include the use of third-party logistics services to store and distribute reparable parts between the bases and depot repair centers. As discussed in chapter 2, these services not only provide delivery of parts within 48 hours, they also alleviate information technology shortfalls by independently tracking parts through the storage and distribution process. Fast information system capability improvements: The Air Force expects the information technology improvements needed to expand Lean Logistics initiatives to come from two sources—commercial software for interim solutions to its current needs and DOD-wide system improvements being managed by the Joint Logistics Systems Center (JLSC) for long-term solutions. These long-term solutions may not be available for 5 to 10 years. In contrast, British Airways fully implemented information system improvements within 3 years. Supplier partnerships and reduced supplier base: The Air Force has not incorporated the concept of an integrated supplier into the Lean Logistics program. As discussed in chapter 2, British Airways and some aircraft manufacturers have significantly improved their logistics systems using this concept. Improved availability of expendable parts is critical to reducing the amount of time it takes to repair component parts. Supplier distribution centers: Similar to the integrated supplier program, the supplier distribution center is a technique used by British Airways to minimize the amount of time it takes to receive parts from a supplier. Currently, the Lean Logistics program is not testing this concept.
Cellular concept for repair processes: To minimize the amount of time it takes to repair parts, British Airways adopted the cellular concept that centralizes the functions and resources needed to repair a part (e.g., testing, cleaning, machining, tooling, and supplies) in one location. British Airways also applied this concept to the aircraft overhaul facilities. The Lean Logistics program has not planned to test this concept. Modernize existing or build new facilities to reflect new business practices: To adopt the cellular concept and improve the storage and distribution of parts, British Airways modernized existing facilities. To maximize the impact of their entire reengineered process and corporate culture, British Airways built green field site facilities and staffed them with employees selected for their technical competence as well as their flexibility for new processes and team orientation. Although new construction and modernization of logistics facilities is a very difficult aspect of reengineering for the Air Force because of base closures and funding limitations, this aspect of reengineering could be a consideration when future logistics decisions are made for supporting new weapon systems. A number of these additional initiatives would require new relationships between the Air Force and commercial suppliers, distributors, and other third parties. To develop these relationships, the Air Force and DLA must work together because, under the current system, DLA is the primary supplier to the Air Force for expendable items and provides a storage and distribution service for Air Force reparable parts. Several major obstacles stand in the way of the Air Force’s efforts to institutionalize its reengineered logistics system. These obstacles include the following: The “corporate culture” within DOD and the Air Force has been traditionally resistant to change. 
Organizations often find changes in operations threatening and are unwilling to change current behavior until proposed ideas have been proven. This kind of resistance must be overcome if the Air Force is to expand its radical new concepts of operations. One of the largest obstacles to speeding up repair times is the lack of expendable parts needed to complete repairs. With a new approach to better serve its military customers, the role of DLA as the traditional supplier of consumable items and as a storage and distribution service is changing. However, at this point, DLA is still considering alternative approaches to manage expendable parts and is discussing these new concepts with contractors and the services. Until these new approaches are implemented, the Air Force’s ability to improve the repair process may be limited. Some of the biggest gains available to the Air Force, such as improvements to outdated and unreliable inventory data systems, require management actions and funding decisions that must be made outside the responsibility of both Lean Logistics managers and the entire Air Force. In addition, some of these systems will not be fully deployed throughout the Air Force for 5 to 10 years. Changes in corporate culture must accompany efforts to transform operations if progress is to continue within the Air Force reengineering program. According to a Lean Logistics official, the current mindset may hinder Lean Logistics for several reasons. First, people find radical changes in operations threatening and, as is common in many organizations, resist efforts to change. Second, Lean Logistics is still a relatively new concept, and personnel lack a thorough understanding of what it is and how it will improve operations. As a result, they are unwilling to change current behaviors until Lean Logistics concepts are proven. Third, Lean Logistics does not yet have support from all of the necessary functional groups within AFMC, the Air Force, and DOD. 
This support will be needed if the full range of changes is to be carried out. In June 1994, we convened a symposium on reengineering that brought together executives from five Fortune 500 companies that have been successful in reengineering activities. The following principles for effective reengineering, reflecting panel members’ views, emerged from the symposium: Top management must be supportive of and engaged in reengineering efforts to remove barriers and drive success. An organization’s culture must be receptive to reengineering goals and principles. Major improvements and savings are realized by focusing on the business from a process rather than functional perspective. Processes should be selected for reengineering based on a clear notion of customer needs, anticipated benefits, and potential for success. Process owners should manage reengineering projects with teams that are cross-functional, maintain a proper scope, focus on customer metrics, and enforce implementation timelines. Panel members at the symposium expressed the view that committed and engaged top managers must support and lead reengineering efforts to ensure success because top management has the authority to encourage employees to accept reengineered roles. Also, top management has the responsibility to set the corporate agenda and define the organization’s culture and the ability to remove barriers that block changes to the corporate mindset. For example, the Vice President of Reengineering at Aetna Life and Casualty Insurance Company said, “Top management must drive reengineering into the organization. Middle management won’t do it.” The panelists agreed that a lack of top management commitment and engagement is the cause of most reengineering failures. 
According to the Corporate Headquarters Program Manager of Process Management at IBM, “To be successful, reengineering [must be] embedded in the fiber of our people until it becomes a way of life.” To develop a corporate culture that is receptive to reengineering, the panelists emphasized the importance of communicating reengineering goals consistently on all levels of the organization, training in skills such as negotiation and conflict resolution, and tailoring incentives and rewards to encourage and reinforce desired behaviors. One of the largest obstacles to speeding up repair times is the lack of expendable parts needed to complete repairs. Supplier-operated local distribution centers could help ensure quick availability of such parts. Similarly, integrated supplier programs, in which certain inventory management responsibilities are shifted to the supplier, are also aimed at improving expendable item support. We have strongly urged DLA to endorse the use of aggressive just-in-time concepts whose principal objectives are to transfer inventory management responsibilities to key distributors. Existing information systems are also an obstacle because they do not always provide the accurate, real-time information needed to expand current efforts beyond their limited scope. According to AFMC’s deputy chief of staff for logistics, AFMC is working with systems that have not been significantly improved in 15 years. As a result, much of the data used to run the Lean Logistics demonstration projects have been collected manually, a task that project leaders said would be impossible under an Air Force-wide program. Improvements to material management and depot maintenance information systems—key to success of the Lean Logistics initiatives—are under the control of JLSC. JLSC is staffed with personnel from the military services and DLA, and is trying to standardize data systems across DOD. These systems, however, will not be fully deployed throughout the Air Force for 5 to 10 years.
Currently, AFMC officials are working with JLSC officials to define Air Force requirements. They are also working to develop short-term solutions to enable the Lean Logistics program to move forward using commercial software. According to one Lean Logistics official, however, AFMC may have trouble pursuing and later adopting many of these short-term solutions because funding for systems outside of JLSC’s umbrella is severely limited. The current Air Force logistics system is inefficient and costly compared with leading-edge business practices. AFMC has recognized the need for radical change and is beginning to pursue some of these practices. Because some of the results to date have been promising, these efforts should be supported and expanded. The Air Force, however, could build on its reengineering effort by including additional practices pursued and successfully adopted by the private sector. In addition, current and future AFMC initiatives will be seriously hindered unless top-level DOD commitment and engagement is received, and all affected Air Force organizations and other DOD components—specifically DLA and JLSC—fully support AFMC’s efforts. DLA’s support will be critical for developing local distribution centers and integrated supplier programs to meet the Air Force requirements for expendable parts. JLSC officials may have to find ways that will allow the Air Force the flexibility to use existing commercial software to resolve existing information technology weaknesses and expand its reengineering initiatives. Without these logistics system improvements, the Air Force will continue to operate a logistics system that results in billions of dollars of wasted resources. Given the budget reductions it has already absorbed, the Air Force might not be able to provide effective logistics support to future DOD operations. 
To build on the existing Air Force reengineering efforts and achieve major logistics system improvements, we recommend that the Secretary of Defense commit and engage top-level DOD managers to support and lead Air Force reengineering efforts to ensure its success. We also recommend that the Secretary of Defense direct the Secretary of the Air Force to incorporate additional leading-edge logistics concepts into the existing Lean Logistics program, where feasible. Specific concepts that have been proven to be successful and should be considered, but have not been incorporated in the current Air Force program include installing information systems that are commercially available to track inventory amounts, location, condition, and requirements; counting existing inventory once new systems are in place to ensure accuracy of the data; establishing closer relationships with suppliers; encouraging suppliers to establish local distribution centers near major repair depots for quick shipment of parts; using integrated supplier programs to shift to suppliers the responsibility for managing certain types of inventory; using third-party logistics services to manage the storage and distribution of reparable parts and minimize DOD information technology requirements; reorganizing workshops, using the cellular concept where appropriate, to reduce the time it takes to repair parts; and integrating successful reengineered processes and flexible, team-oriented employees in new facilities (like the green field sites) to maximize productivity improvements, as new facilities are warranted to meet changes in the types and quantities of aircraft. 
In addition, we recommend that the Secretary of the Air Force (1) prepare a report to the Secretary of Defense that defines its strategy to adopt these leading practices and expand the reengineering program Air Force-wide and (2) establish milestones for the report’s preparation and issuance and identify at a minimum the barriers or obstacles that would hinder the Air Force from adopting these concepts; the investments (people, skills, and funding) required to begin testing these new concepts and the projected total costs to implement them Air Force-wide; the potential savings that could be realized; and the Air Force and other DOD components whose support will be needed to fully test these new concepts. We further recommend that the Secretary of Defense use the Air Force’s report to set forth the actions and milestones to alleviate any barriers or obstacles (such as overcoming resistance to organizational change and improving outdated inventory information systems), provide the appropriate resources, and ensure the collaboration between the Air Force and other DOD components that would enable the Air Force to achieve an integrated approach to reengineering its processes. Once these steps are taken, we recommend that the Secretary of Defense direct the Secretary of the Air Force to institutionalize a reengineering effort that is consistent with successful private sector reengineering efforts. These efforts include communicating reengineering goals and explaining them to all levels of the organization, training in skills to enable employees to work across functions and modifying this training as necessary to support the reengineering process, and tailoring rewards and incentives to encourage and reinforce desired behaviors. In commenting on a draft of this report, DOD generally agreed with the findings, conclusions, and recommendations, and stated that the Air Force’s Lean Logistics program should receive top-level DOD support in achieving its goals. 
DOD also stated that the Air Force should consider incorporating additional leading-edge practices into its reengineering effort. According to DOD, the Air Force will be asked to provide a report to the Secretary of Defense by July 1996 that will discuss the feasibility of including such additional practices in the Lean Logistics initiative and to address other concerns raised in this report. By October 1996, the Office of the Secretary of Defense will address how it plans to alleviate any barriers and obstacles identified in the Air Force’s report. DOD indicated that the Air Force plans to take steps to institutionalize its reengineering efforts by December 1996. | Pursuant to a congressional request, GAO reviewed the Air Force's management of its reparable parts inventory, focusing on: (1) commercial airline industry practices to streamline logistics operations and improve customer service; (2) Air Force reengineering efforts to improve its logistics system and reduce costs; and (3) barriers to the Air Force's reengineering efforts. 
GAO found that: (1) the commercial airline industry, including certain manufacturers, suppliers, and airlines, are using leading-edge practices to improve logistics operations and reduce costs; (2) in recognition of increasing budgetary pressures, the changing global threat, and the need for radical improvements in its logistics system, the Air Force has begun a reengineering program aimed at redesigning its logistics operations; (3) GAO has urged these changes and supports them, and has identified additional private-sector practices that may result in even greater savings; (4) there are several major barriers to bringing about change that must be addressed and resolved if the Air Force is to reengineer its logistics system and save billions of dollars; (5) the Air Force reengineering effort addresses inherent problems with its logistics system, but additional steps can be taken to maximize potential improvements; (6) additional steps GAO identified that could enhance this program include establishing a top-level DOD champion of change to support the Air Force initiatives, greater use of third-party logistics services, closer partnerships with suppliers, encouraging suppliers to use local distribution centers, centralizing repair functions, and modifying repair facilities to accommodate these new practices; (7) the success of the Air Force in achieving a quantum leap in system improvements hinges on its ability to address and overcome certain barriers, such as inherent organizational resistance to change; (8) top-level DOD officials must be supportive of and engaged in Air Force reengineering efforts to remove these barriers and drive success; (9) information systems do not always provide Air Force managers and employees with accurate, real-time data on the cost, amount, location, condition, and usage of inventory; and (10) without the support of top-level DOD management and accurate, real-time inventory information, the expansion of the Air Force's reengineering 
efforts could be seriously impaired. |
Since 1824 the Corps has been responsible for constructing and maintaining a safe, reliable, and economically efficient navigation system. Today, this system is comprised of more than 12,000 miles of inland waterways, 300 large commercial harbors, and 600 small harbors. From fiscal years 1998 through 2002, the Corps has removed an average of about 265 million cubic yards of material each year from the navigable waters of the United States, at an average annual cost of about $856 million (in constant 2002 dollars). Private industry performs most of the overall dredging, except for the work done by hopper dredges, in which both the Corps and industry perform a significant amount of the work. Of the $856 million spent annually on overall dredging, about $197 million is spent on all hopper dredging (both maintenance and new construction), with industry vessels accounting for about $148 million annually and Corps vessels accounting for about $49 million. Each of the Corps’ hopper dredges typically operates in a specific geographic area. The Wheeler, a large-class dredge, usually operates in the Gulf of Mexico. The McFarland, a medium-class dredge, usually operates in the Atlantic and Gulf of Mexico. The Essayons, a large-class dredge, and the Yaquina, a small-class dredge, typically work along the Pacific coast. Legislation enacted in the 1990s sought to further increase the role of industry in hopper dredging by placing operational restrictions on the Corps’ hopper dredges. Specifically, the Energy and Water Development Appropriations Act for fiscal year 1993 and subsequent appropriations acts required the Corps to offer for competitive bidding 7.5 million cubic yards of hopper dredging work previously performed by the federal fleet. Since fiscal year 1993, the Corps has addressed this requirement by reducing the use of each of its four dredges from about 230 workdays per year to about 180 workdays per year. 
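As a rough cross-check of the averages just cited, the figures imply an overall unit cost of a little over $3 per cubic yard, and the industry and Corps hopper-dredging amounts sum to the hopper total. The Python sketch below is illustrative only; the variable names are ours.

```python
# Back-of-envelope check of the fiscal year 1998-2002 annual averages
# cited above (constant 2002 dollars).
annual_volume_cy = 265_000_000   # cubic yards removed per year
annual_cost = 856_000_000        # overall dredging cost per year

unit_cost = annual_cost / annual_volume_cy
print(f"${unit_cost:.2f} per cubic yard")  # → $3.23 per cubic yard

# Hopper dredging alone: the industry and Corps amounts sum to the total.
hopper_total = 197_000_000
industry = 148_000_000
corps = 49_000_000
assert industry + corps == hopper_total
```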
The Water Resources Development Act for fiscal year 1996 required the Corps to initiate a program to increase the use of private hopper dredges principally by taking the Wheeler out of active status and placing it into ready reserve. The Corps implemented this requirement by allowing the Wheeler to work 55 days a year plus emergencies (which includes urgent and time-sensitive dredging needs). The 1996 act did not alter the Corps’ duty to implement the dredging program in the manner most economical and advantageous to the United States, and it restricted the Corps’ authority to reduce the workload of other federal hopper dredges. The conference report that accompanied the act directed the Corps to periodically evaluate the effects of the ready reserve program on private industry and on the Corps’ hopper dredge costs, responsiveness, and capacity. The Energy and Water Appropriations Act for fiscal year 2002 placed another restriction on the use of the Corps’ dredge McFarland, limiting it to emergency work and its historical scheduled maintenance in the Delaware River (about 85 workdays per year). Taken together, these restrictions have increased private industry’s share of the hopper dredging workload. In theory, restrictions on the use of the Corps’ hopper dredges could generate efficiency and cost-savings benefits to both government and industry. For example, restricting the Corps’ hopper dredges to fewer scheduled workdays could make them more available to respond to emergency dredging needs. In addition, the increase in demand for dredging by private industry could lead to improvements in dredging efficiency. If achieved, firms might be able to dredge the same amount of material at a lower cost or more material at the same cost. Furthermore, if more work were provided to the private hopper dredging industry, competition could increase if the existing dredging firms expanded their fleets or more firms entered the market. 
Consequently, the prices that the government pays to contractors could fall. However, economic principles also suggest that if an industry is given more work without increasing capacity or the number of competing firms, prices could rise because the demand for its services has increased. The Corps’ and private industry’s respective roles in the hopper dredging market have changed since legislation enacted in 1978 prompted a movement toward privatization of hopper dredging in the United States. Since that time, the Corps has gradually reduced its hopper dredging fleet from 14 to 4 vessels, while a private hopper dredging industry of five firms and 16 vessels has emerged. Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the Corps needs to retain at least a small hopper dredge fleet to (1) provide additional dredging capacity during peak demand years, (2) meet the emergency and national defense needs identified in the 1978 legislation, and (3) provide an alternative work option at times when the industry offers unreasonable bids or no bids at all. To determine the reasonableness of private contractor bids, the Corps develops a government cost estimate for its hopper dredging solicitations. If the low bid is no more than 25 percent above the government cost estimate, the Corps awards the contract. If all bids exceed the government cost estimate by more than 25 percent, the Corps may pursue a number of options, including performing the work itself. The practical value of this protection against high bids, however, has been limited by the Corps’ use of some outdated contractor cost information and its continued use of an expired policy to calculate transit costs. Before 1978, the Corps performed all of the nation’s hopper dredge work. 
In 1978, the Congress passed legislation to encourage private industry participation in all types of dredging and required the Corps to reduce the fleet of federal vessels to the minimum necessary for national defense and emergency purposes, as industry demonstrated its capability to perform the work. According to the Senate committee report associated with the 1978 legislation, one of the law’s main purposes was to provide incentives for private industry to construct new hopper dredges. Between 1978 and 1983, as a private hopper dredging industry began to emerge, the Corps reduced its hopper dredge fleet from 14 to its current 4 vessels. By the late 1980s, the Corps stopped assigning its hopper dredges to new construction projects (primarily channel deepening), leaving this work entirely to private industry. Both Corps and private industry hopper dredges continue to perform maintenance work on existing channels. From fiscal years 1998 through 2002, the Corps’ dredges performed about 28 percent of the nation’s hopper dredging maintenance work, annually dredging about 16 million cubic yards of material at a cost of about $49 million (in constant 2002 dollars). During the same period, industry dredges performed about 72 percent of the nation’s hopper dredging maintenance work, dredging about 40 million cubic yards of material annually, at a cost of about $93 million. As a result of the 1978 legislation, seven firms emerged to compete for the Corps’ hopper dredging contracts. Consolidation and firm buy-outs in the 1990s have left five firms in today’s market. (Appendix II contains a more detailed description of the U.S. hopper dredge fleet.) 
Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the Corps’ hopper dredge fleet currently (1) provides additional dredging capacity during peak demand years, (2) meets emergency dredging and national defense needs identified in the 1978 legislation, and (3) provides an alternative work option when industry provides no bids or when its bids exceed the government cost estimate by more than 25 percent. In addition, representatives of selected ports and the maritime industry generally supported the Corps’ retention and operation of a federal hopper dredge fleet to ensure that dredging needs are met in a timely manner. One of the reasons for the Corps to maintain a hopper dredge fleet is that changes in annual weather patterns, such as El Niño, and severe weather events, such as hurricanes and floods, can create a wide disparity in the demand for hopper dredging services from year to year. During fiscal year 1997 the Corps and private industry used their hopper dredges for maintenance work to remove almost 77 million cubic yards nationwide. In contrast, during fiscal year 2000 they removed about 50 million cubic yards. (See fig. 2.) Hopper dredging needs at the mouth of the Mississippi River are particularly variable from year to year, with annual dredging requirements ranging from 2 million to 50 million cubic yards. Representatives from private dredging firms maintain that industry is not likely to build the additional capacity needed to meet demand in peak years. Corps officials and representatives from the dredging industry, selected ports, and the maritime industry generally agreed that the federal government should provide the additional dredging capacity required to meet the needs of peak demand years. The Corps’ hopper dredges are also needed to respond to emergency dredging assignments. 
For example, according to a Corps official, it was necessary for the Corps to send the Essayons to finish work on a project in Alaska that was critical to complete before the winter season and freezing conditions set in. In addition, Corps vessels have been used during instances where industry has submitted no bids in response to solicitations. For example, when rains in the Mississippi River Basin caused a build-up of material in navigation channels, the Corps issued a solicitation, but no bids were received because industry vessels were unavailable. Consequently, the Wheeler was used to perform the work. In such situations, the Corps’ fleet acts as insurance to meet dredging needs, ensuring that shipping patterns are not adversely affected. The existence of the Corps’ fleet theoretically offers a measure of protection against inordinately high bids from private contractors. While the private dredging market consists of 16 dredges owned by five firms, not all dredges compete for any given solicitation because (1) some, if not most, hopper dredges are committed to other jobs; (2) hopper dredges may be in the shipyard; (3) differences in hopper dredge size and capability mean that not all hopper dredges are ideally suited to perform the work for a particular job; and (4) hopper dredges cannot quickly move from one dredging region to another. For example, large hopper dredges may have difficulty maneuvering in small inlet harbors, and small hopper dredges may be inefficient at performing large projects with distant disposal sites. Thus, the Corps’ hopper dredge fleet provides an alternative dredging capability that can be brought to bear when private dredges are not available or when private industry bids are deemed too high. The Corps’ government cost estimate for hopper dredging work is pivotal in determining the reasonableness of private contractor bids. 
The Corps is required to determine a fair and reasonable estimate of the costs for a well-equipped contractor to perform the work. By law, the Corps may not award a dredging contract if the price exceeds the government estimate by more than 25 percent. In such cases, the Corps has several options. It can (1) cancel the solicitation, (2) readvertise the solicitation, (3) consider bidders’ challenges to the accuracy of the Corps’ cost estimate, (4) convert the solicitation into a negotiated procurement, or (5) use one of its own dredges to do the work. The accuracy of the Corps’ cost estimate depends on having access to up-to-date cost information. Although the Corps adjusts contractor cost data annually to reflect current pricing levels, this step does not account for fundamental changes, such as an industry vessel reaching the end of its depreciable life or industry acquisition of new vessels. The Corps has not obtained comprehensive industrywide contractor cost information since 1988. Since then, contractors have provided the Corps updated cost information to support specific costs included in the Corps’ cost estimates that they believe to be outdated, but they are not required to provide updated information for all costs. As a result, the Corps has updated cost information only for those costs that contractors choose to provide. In our discussions with Corps officials, they acknowledged the need to initiate an effort to obtain and verify current cost data for industry vessels. In addition, the Corps continues to follow an expired policy when calculating contractor transit costs to the dredge site, further calling into question the accuracy of the government cost estimates. The Corps’ Engineering Regulation 1110-2-1300, which called on the Corps to calculate industry transit costs to the dredge site based on the location of the second-closest industry dredge, expired in 1994. However, the Corps continues to use this method when calculating transit costs for at least some of its solicitations.
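The two estimate-related mechanics described above, the 25-percent award test and the expired second-closest-dredge transit method, can be sketched as follows. This is our illustration only; the function names, threshold encoding, and distances are ours, not the Corps’.

```python
def bid_within_threshold(low_bid, gov_estimate, threshold=0.25):
    """By law, the Corps may not award a dredging contract when the low bid
    exceeds the government cost estimate by more than 25 percent."""
    return low_bid <= gov_estimate * (1 + threshold)

def transit_basis_distance(industry_dredge_distances):
    """Under expired Engineering Regulation 1110-2-1300, transit cost was
    estimated from the location of the SECOND-closest industry dredge,
    not the closest one."""
    return sorted(industry_dredge_distances)[1]

# A $1.2 million bid against a $1.0 million estimate is awardable;
# a $1.3 million bid is not.
print(bid_within_threshold(1_200_000, 1_000_000))   # True
print(bid_within_threshold(1_300_000, 1_000_000))   # False

# If the closest available dredge is 500 miles from the site and the next
# is 5,000 miles away, the expired method prices transit from the
# 5,000-mile vessel, inflating the estimate.
print(transit_basis_distance([500, 5_000, 9_000]))  # 5000
```

The Washington State solicitation discussed below shows how large the gap can be in practice: roughly $480,000 estimated under the expired method versus about $100,000 for the dredge that actually performed the work.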
For example, Corps officials followed the expired policy when demonstrating to us how they calculated the transit costs for a solicitation in Washington State. In this case, the second- closest industry dredge was located in the Gulf of Mexico, and the estimated transit costs amounted to about $480,000 because the vessel would have had to travel thousands of miles and go through the Panama Canal. However, the private contractor’s dredge that performed the work was located fewer than 500 miles from the dredge site, for which the transit costs were estimated to be about $100,000. After bringing this issue to the Corps’ attention, the Corps told us that it plans to reexamine its transit cost policies. Restrictions on the Corps’ hopper dredge fleet, which began in fiscal year 1993, have imposed costs on the Corps’ dredging program, but have thus far not resulted in proven benefits. Most of the costs of the Corps’ hopper dredges are incurred regardless of how frequently the dredges are used. A possible benefit of the restrictions is that they could eventually encourage more firms to enter the market or existing firms to add capacity, which, in turn, may promote competition, improve dredging efficiency, and thus reduce prices. Although there has been an increase in the number of private industry hopper dredges since the restrictions were first imposed, the number of private firms in the hopper dredging market has decreased. In addition, during the same time period, the number of contractor bids per Corps solicitation has decreased, while the number of winning bids exceeding the Corps’ cost estimate has increased. Restrictions on the Corps’ vessels could also potentially enhance the Corps’ responsiveness to emergency dredging needs. 
However, the Corps is unable to evaluate whether emergency dredging needs have been met more or less efficiently since the restrictions went into effect because it does not specifically identify and track emergency work performed by either Corps or industry vessels. The Corps incurs many of the costs for maintaining and operating its hopper dredges regardless of how much the vessels are used. Thus, as shown in table 1, when the Wheeler was placed in ready reserve and restricted to 55 workdays plus emergencies, the average number of days it worked per year and its productivity (measured by cubic yardage dredged) declined by about 56 percent, while its costs declined by only 20 percent. Crew size declined by about 21 percent, but payroll costs declined by just 2 percent because dredging needs required the Corps to pay the smaller crew overtime to finish the work. In addition, fuel costs did not drop in proportion to use and productivity because the vessel’s engines were utilized for shipboard services (e.g., electricity) while it remained at the dock—a necessary procedure for maintaining minimal vessel readiness. Other costs unrelated to crew or fuel represent the plant or capital costs of a dredge, many of which the Corps incurs regardless of how much a dredge is used. The Corps refers to the difference between a vessel’s total costs and the value of the dredging services it provides (the net cost) as a “subsidy.” The Corps estimates the annual subsidy to maintain the Wheeler idle in ready reserve at about $8.4 million. This subsidy is a direct cost of ready reserve. In addition to the subsidy, the Corps must pay contractors to do the work the Wheeler no longer performs. The difference between the vessel’s traditional workload and its current workload is approximately 6.6 million cubic yards. 
Depending on whether private industry hopper dredges are able to perform this work in aggregate at a lower or higher cost than if the Wheeler performed the work, the total cost to government of the Wheeler in ready reserve status could be either lower or higher than the Corps’ estimated subsidy. In addition to the Wheeler’s subsidy, restrictions have led to inefficient operations for the other Corps hopper dredges, resulting in additional costs for the Corps. According to Corps officials, September is the ideal time to dredge in the Pacific Northwest, because dredging conditions generally deteriorate in October. The officials mentioned that, at times, the Essayons and the Yaquina have reached their fiscal year operating limits and returned to port in September, before the projects they were working on were complete. The dredges were sent back to complete the project after the new fiscal year began in October, even though weather conditions may have made dredging conditions less than optimal, and the Corps incurred additional transit costs. According to some Corps officials, the annual operating limit cannot be extended. For example, the Essayons stopped dredging the mouth of the Columbia River and returned to port at the end of fiscal year 2001 when it reached its operating limit. The vessel returned to finish the work at the start of the new fiscal year, but adverse weather conditions prevented it from fully dredging the river. As a result, some projects may be postponed until the following fiscal year, reprioritized, or canceled altogether. A potential benefit of the restrictions on the Corps’ hopper dredge fleet is that an increase in demand for industry’s dredging services could encourage existing firms to make capital investments (e.g., build new dredges or improve existing dredges) or encourage more firms to enter the dredging market. 
Dredging industry representatives told us that the restrictions have already led to an increase in the number of industry vessels and, as evidence, pointed to the addition of two new dredges, the Liberty Island, a large-class dredge introduced in 2002, and the Bayport, a medium-class dredge introduced in 1999, as well as the return of the Stuyvesant, a large-class dredge, to the U.S. hopper dredging market. Moreover, they added that since the restrictions, the private hopper dredging industry has also made improvements and enhancements to its existing fleet—specifically the Columbia—thus improving the efficiency of its dredging operations and increasing the capacity of its vessels. However, the representatives also told us that the restrictions are only one of several factors the private hopper dredging industry considers when deciding to acquire or build an additional dredge. In addition, firms must invest in equipment to remain competitive in any industry. As a result, it is unclear to what extent the restrictions on the Corps’ vessels were a factor in industry’s investment decisions to increase its fleet size and add dredging capacity. While the private hopper dredging industry has recently placed two new dredges on line, it has sold the small-class dredge Mermentau and placed another small-class dredge, the Northerly Island, up for sale. In addition, during the last decade the private hopper dredging industry has decreased from seven firms to five firms. Specifically, since 1993, two firms exited the market, one firm entered the market, and two firms merged. The consolidation in the industry does not necessarily mean that competition has been reduced because the new industry structure could have resulted in enhanced capacity, flexibility, and efficiency for the remaining firms. However, it is uncertain whether the private hopper dredging industry is more or less competitive now than it was prior to the restrictions. 
Historical data reveal that, in general, as shown in figure 3, in years when more material is available to private industry, industry submits fewer bids per Corps solicitation. For example, during fiscal year 1991, when the Corps estimated that 31.3 million cubic yards of maintenance material would be contracted out to industry, the average number of bids per solicitation was 3.2. In contrast, during fiscal year 1998, when the Corps estimated that 53.7 million cubic yards of maintenance material would be contracted out to industry, industry submitted an average of about 2 bids per solicitation. Likewise, as shown in figure 4, in years when there were fewer industry bids per Corps solicitation, the average winning industry bid, as a percentage of the Corps’ cost estimate, was higher. For example, during fiscal year 1991, when the average number of bids per solicitation was 3.2, the average winning bid was 79 percent of the Corps’ estimate. In contrast, during fiscal year 1998, when the average number of bids per solicitation was 2, the average winning bid was 97 percent of the Corps’ estimate. In general, when there are fewer industry bids per solicitation, the winning industry bid relative to the Corps’ cost estimate increases. In fiscal years 1990 through 2002, more than half of the solicitations for hopper dredging maintenance work received just one or two bids from private contractors. During these years, when only one contractor bid on a solicitation, the bid exceeded the government estimate 87 percent of the time. In contrast, when there were three or more bids on a solicitation, the winning bid exceeded the government estimate only 22 percent of the time. After the Corps’ hopper dredge fleet was effectively restricted to 180 workdays (fiscal years 1993 through 2002), the number of industry bids per solicitation declined from about 3 to roughly 2.4. 
Specifically, as shown in figure 5, when there were no limits on the use of the Corps’ hopper dredges (fiscal years 1990 through 1992), only 5 percent of solicitations received one bid. After limits were placed on the Corps’ hopper dredges (fiscal years 1993 through 2002), 19 percent of solicitations had only one bid. Moreover, before the restrictions, 67 percent of the solicitations had three or more bids, whereas, after the restrictions, only 42 percent had three or more bids. These changes might have been expected because, after the restrictions, industry’s share of hopper dredging work increased while the number of hopper dredging firms decreased from seven to five. Furthermore, in the time period following the imposition of the 180-day restriction, the frequency with which the winning industry bid exceeded the Corps’ cost estimate has increased. For example, as shown in figure 6, prior to the restrictions, the winning bid exceeded the Corps’ cost estimate 24 percent of the time. After the restrictions were imposed, the winning bid exceeded the Corps’ estimate 45 percent of the time. This finding is consistent with economic principles; that is, all else equal, an increase in demand for dredging by private industry with fixed supply would result in higher prices. It should be noted that the extent to which the restrictions contributed to the decrease in the number of industry bids per Corps solicitation and the increase in the winning industry bid relative to the Corps’ cost estimate is unknown. Other factors could have also contributed to these changes. For example, an increase in the demand for hopper dredging services for new construction projects or beach nourishment could lead to a decrease in the number of bids received for maintenance projects. Similarly, the introduction of environmental restrictions on when hopper dredging can take place could contribute to an increase in the winning industry bid relative to the Corps’ cost estimate. 
Nevertheless, the decrease in the number of bids per solicitation and the increase in bids exceeding the Corps’ cost estimates raise questions about the effects the restrictions may have had on competition and prices, demonstrating the need for a comprehensive analysis of the effects of the restrictions on competition, efficiency, and prices. Another potential benefit of restrictions on the use of the Corps’ vessels is enhanced responsiveness to emergencies. However, there is disagreement within the Corps on this issue. One Corps official believes that a dredge in ready reserve is better able to handle emergencies than if it were working 180 days because it is in a “standby” status at the dock, ready to respond. In contrast, others in the Corps believe that a dredge can respond just as well or better to an emergency while working a full schedule because the dredge can temporarily halt the project it is working on, respond to the emergency, and then return to its scheduled work. During our discussions with representatives from selected ports and the maritime industry, we did not learn of any instances of problems in the Corps’ responsiveness to emergencies prior to restrictions or instances of improved response time since the restrictions went into effect. A major reason that the Corps is unable to evaluate whether emergency dredging needs have been met more or less efficiently since the restrictions went into effect is that its dredging database—the Dredging Information System—does not specifically identify and track emergency work performed either by Corps or industry vessels. Consequently, the Corps cannot readily determine how many days have been needed for each of its vessels to respond to emergencies. In addition, the Corps does not know whether it is paying contractors more or less for emergency dredging projects than for routinely scheduled maintenance work.
Such information would be a valuable tool for determining how emergency dredging needs can be met in a manner that is the most economical and advantageous to the government—that is, when and under what circumstances to contract with the private hopper dredging industry for these emergencies or when to use Corps vessels. In discussing this issue, Corps officials agreed that obtaining information on emergencies is important for managing their hopper dredging program and told us they have initiated efforts to collect such data to incorporate into their dredging database. In a June 2000 report to the Congress, the Corps stated that the placement of the Wheeler in ready reserve had been a success and recommended that the vessel remain in ready reserve. However, the report contained a number of analytical and evidentiary shortcomings, and, when asked, the Corps could not provide any supporting documentation for its recommendation. In addition, the report also proposed that the McFarland be placed in ready reserve, but the Corps did not conduct an analysis to support this proposal. The costs to place the McFarland in ready reserve are likely to be similar to the costs incurred by placing the Wheeler in ready reserve. Because the McFarland’s workload would be reduced from 180 days to 55 days plus emergencies, the Corps would incur annual costs of about $8 million when the vessel is idle—largely because much of a vessel’s costs are incurred regardless of its level of use. Furthermore, according to the Corps, the McFarland will require at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness for future service. It is questionable whether such an investment in a vessel that would be placed in ready reserve and receive only minimal use is in the best interest of the government. 
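The disproportion between reduced use and reduced cost follows directly from the fixed-cost structure noted above: only the variable portion of a dredge’s costs scales with use. The back-of-the-envelope sketch below is ours; the fixed-cost share is inferred from the Wheeler percentages reported earlier (a roughly 56 percent drop in use against a 20 percent drop in costs), not taken from Corps cost data.

```python
def total_cost_decline(use_decline, fixed_share):
    """Approximate total cost decline when only the variable share of
    costs scales with use; fixed (plant/capital) costs are incurred
    regardless of how much the vessel works."""
    return (1 - fixed_share) * use_decline

# With roughly 64 percent of costs fixed, a ~56 percent drop in the
# Wheeler's workdays and output yields only about a 20 percent cost drop.
print(round(total_cost_decline(0.56, 0.64), 2))  # 0.2
```

The same arithmetic explains why placing the McFarland in ready reserve would save the Corps comparatively little: its estimated $8 million annual subsidy reflects costs that would be incurred whether or not the vessel works.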
The Water Resources Development Act for 1996 required the Corps to determine whether (1) the Wheeler should be returned to active status or continue in ready reserve status or (2) another federal hopper dredge should be placed in ready reserve status, and issue a report to the Congress on its findings. The Corps issued the required report in June 2000, recommending that the Wheeler remain in reserve and proposing that an additional dredge, the McFarland, also be placed in reserve. However, when asked, the Corps official who authored the report told us that he did not have any supporting documentation for the report. In addition, the report had a number of evidentiary and analytical shortcomings. For example, the evidence presented in the report showed that the price the government paid to the industry for hopper dredging was higher in the 2 years after the Wheeler was put in ready reserve than it was the year before. This raises questions about the validity of the recommendation contained in the report. Furthermore, the report did not contain a comprehensive analysis. A comprehensive economic analysis of a government program or policy would identify all the resulting costs and benefits, and, where possible, quantify these measures. Both the quantitative and qualitative costs and benefits would need to be compared and evaluated to determine the success or failure of a program and to potentially be used as a basis for future policy decisions. With regard to the restrictions on the Corps’ hopper dredges, a comprehensive economic analysis might contain, among other things, all costs associated with the nonuse of the vessel and the potential benefits that might result due to efficiency gains, increased competition, and lower prices. 
The analysis might also examine whether ports, harbors, and access channels were maintained more or less effectively, or whether emergency dredging needs were met in a more or less timely and cost-effective manner following implementation of the restrictions. The Corps has not demonstrated that placing an additional hopper dredge in ready reserve, specifically the McFarland, would be beneficial to the United States. In its June 2000 report to the Congress on the ready reserve status of the dredge Wheeler, the Corps proposed that the McFarland be the next dredge placed in reserve. However, the Corps did not offer any analysis on the potential costs of placing an additional Corps hopper dredge in reserve or the benefits of such an action. Moreover, to be available for future use, the 35-year-old McFarland requires at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness. The repairs include asbestos removal; repairs to the hull; engine replacement; and upgrades of equipment, machinery, and other shipboard systems. It is questionable whether spending $25 million to rehabilitate the McFarland and then placing it in ready reserve is prudent. Furthermore, if the McFarland were placed in ready reserve, the Corps would incur annual costs similar to the subsidy that is already incurred for the Wheeler. Because the Wheeler’s costs do not vary proportionally to its use, the cost to operate the vessel 55 days a year plus emergencies in ready reserve is only marginally less than the cost to operate it 180 days a year. The Corps estimates that if the McFarland were placed in ready reserve, it would require an annual subsidy of about $8 million to remain idle. The Corps would also need to contract out the work the McFarland would no longer be doing—approximately 2 to 3 million cubic yards per year.
Depending on whether private industry hopper dredges are able to perform this work in aggregate at a lower or higher cost than if the McFarland performed the work, the total cost to government of placing the McFarland in reserve could be either lower or higher than the estimated annual subsidy. Finally, placing the McFarland in ready reserve could increase competition if such restrictions spurred an increase in investment in private hopper dredges. However, it is questionable whether placing the McFarland in ready reserve would provide enough incentive for industry to make additional investments. Hopper dredges play a critical role in keeping the nation’s ports open for both domestic and international trade. This function has been and will likely continue to be carried out through a mix of private industry and government-owned dredges. At issue is how to use this mix of dredges in a manner that maintains the viability of the private fleet while minimizing the costs to government. The Corps has proposed to the Congress that additional restrictions on the use of its hopper dredges are warranted, but it cannot provide any analytical evidence to support its position. The limited evidence that does exist indicates that these restrictions have imposed costs on the government, while the benefits are largely unproven. Unless and until the Corps gathers the data, comprehensively analyzes the costs and benefits of restrictions on the use of its hopper dredges, and takes the steps to update its cost estimates, there is no assurance that the nation’s hopper dredging needs are being met in a manner that is the most economical and advantageous to the government.
In an effort to discern the most economical and advantageous manner in which to operate its hopper dredge fleet, and because the Corps has been unable to support, through analysis and documentation, the costs and benefits of placing its hopper dredges in ready reserve, we recommend that the Secretary of the Army direct the Corps of Engineers to take the following three actions:

1. Obtain and analyze the baseline data needed to determine the appropriate use of the Corps’ hopper dredge fleet, including, among other things, data on the frequency, type, and cost of emergency work performed by the Corps and the private hopper dredging industry; contract type; and solicitations that receive no bids or where all the bids received exceeded the Corps’ estimate by more than 25 percent.

2. Prepare a comprehensive analysis of the costs and benefits of existing and proposed restrictions on the use of the Corps’ hopper dredge fleet, including limiting the Corps’ dredges to 180 days of work per year, placing the Wheeler into ready reserve, limiting the McFarland to its historic work in the Delaware River, and placing the McFarland into ready reserve status.

3. Assess the data and procedures used to perform the government cost estimate when contracting dredging work to the private hopper dredging industry, including, among other things, (1) updating the cost information for private industry hopper dredges and (2) examining the policies related to calculating transit costs.

We provided a draft of this report to the Acting Assistant Secretary of the Army and the Dredging Contractors of America for review and comment. In a letter dated March 21, 2003, the Department of the Army (Army) provided comments on a draft of this report. The Army agreed with our recommendations and provided time frames for implementing each of them. It also provided additional comments suggesting clarification and elaboration on a number of issues discussed in our report. See appendix III for the Army’s comments and our responses.
In a letter dated March 3, 2003, the Dredging Contractors of America (DCA) provided detailed comments on a draft of this report. DCA generally agreed with our recommendations. However, it believed strongly that reducing the scheduled use of the Corps’ hopper dredges has resulted in proven benefits. We continue to believe that the relationship between the restrictions and benefits to the government are unproven because (1) the Corps incurs costs related to the underutilization of its dredges, and (2) since the restrictions were first imposed, the Corps has received fewer industry bids per solicitation, and the percentage of winning industry bids that exceed the Corps’ cost estimates has increased. See appendix IV for DCA’s comments and our responses. We conducted our review between January 2002 and February 2003 in accordance with generally accepted government auditing standards. A detailed discussion of our scope and methodology is presented in appendix I. We will send copies of the report to the Secretary of the Army, appropriate congressional committees, and other interested Members of Congress. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix V. To assess the changing roles of the Corps and industry in hopper dredging and the characteristics of the hopper dredging industry, we obtained Corps’ studies and data from the Corps’ Navigation Data Center that provided information on the hopper dredging requirements of the United States, including the quantity of material dredged annually by the Corps and the private hopper dredging industry, and their associated costs. We also reviewed the laws that define these roles. 
In addition, we interviewed Corps officials; representatives from the five hopper dredging firms (B+B Dredging Co., Inc., Bean Stuyvesant LLC, Great Lakes Dredge & Dock Company, Manson Construction Co., and Weeks Marine, Inc.); the maritime industry (the Delaware River Port Authority, Maritime Exchange for the Delaware River and Bay, Navios Ship Agencies, Inc., and the Steamship Association of Louisiana); dredging and port associations (Dredging Contractors of America, Pacific Northwest Waterways Association, and American Association of Port Authorities); and selected ports (Portland, Seattle, New York/New Jersey, New Orleans, and Wilmington). To obtain a better understanding of hopper dredging from the perspective of the private hopper dredging industry, we visited and toured a medium-class industry hopper dredge working in the Chesapeake and Delaware Canal and interviewed its crew. Moreover, we reviewed the Corps’ cost estimating policies. To determine the intent and effects of the restrictions placed on the use of the Corps’ hopper dredge fleet, we analyzed the laws governing the use of the Corps’ hopper dredges. We also reviewed studies conducted by the Corps and the Pacific Northwest Waterways Association. For qualitative information, we obtained documents and interviewed Corps officials from headquarters and district and division offices, including Jacksonville, New Orleans, Philadelphia, Portland, Walla Walla, and the North Atlantic Division, as well as representatives from the private hopper dredging firms, selected ports, dredging and port associations, and the maritime industry. For quantitative information, we performed descriptive statistical analyses using data on the winning contractor bids, estimated industry dredging volumes, and the Corps’ cost estimate available from the Corps’ Dredging Information System database. 
To evaluate whether further restrictions on the Corps’ hopper dredge fleet, including placing the Corps’ dredge McFarland in ready reserve, are justified, we reviewed studies and analyses performed by the Corps to support its proposal to place the McFarland in ready reserve. We also interviewed officials from the Corps and representatives from the private hopper dredging industry, selected ports, and the maritime industry to gain their views on the possible effects on competition and emergency response if the current restrictions on the Corps’ hopper dredges, particularly the McFarland, were modified. To determine the costs associated with repairing the McFarland, we obtained and analyzed cost estimates for the repairs prepared by the Corps’ Philadelphia district office and discussed the estimates with Corps district and headquarters officials. We also visited and toured the McFarland when it was working in the Delaware River and interviewed the McFarland’s crew and Corps officials from the Philadelphia district and the North Atlantic Division offices. We conducted our review between January 2002 and February 2003 in accordance with generally accepted government auditing standards. There are currently 20 hopper dredges operating in the United States. (See table 2.) Of the 20 dredges, 4 are small-class hopper dredges, 10 are medium-class hopper dredges, and 6 are large-class hopper dredges. Of the 16 private hopper dredges, Great Lakes Dredge & Dock Company owns 7, Manson Construction Co. owns 3, and the remaining firms (B+B Dredging Co., Inc., Bean Stuyvesant LLC, and Weeks Marine, Inc.) each own 2. 1. As discussed in our report, the Corps’ cost estimate is pivotal in determining the reasonableness of private contractors’ bids, and by law the Corps may not award a contract if the bid price exceeds the cost estimate by more than 25 percent. Consequently, we believe that it is critical for the Corps to have comprehensive data for all costs and all industry vessels.
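The statutory bid ceiling just described is simple arithmetic: a bid is awardable only if it does not exceed the government cost estimate by more than 25 percent. The sketch below illustrates the check; the function name and dollar figures are our own, not the Corps’ actual tooling or data.

```python
def bid_within_statutory_limit(bid: float, government_estimate: float) -> bool:
    """Illustrative check of the statutory rule that the Corps may not
    award a contract when the bid price exceeds the government cost
    estimate by more than 25 percent. The name and structure are our
    own sketch."""
    return bid <= 1.25 * government_estimate

# Against a hypothetical $4.0 million government estimate:
print(bid_within_statutory_limit(4_900_000, 4_000_000))  # True  (22.5 percent over)
print(bid_within_statutory_limit(5_100_000, 4_000_000))  # False (27.5 percent over)
```

Because the estimate sets the award ceiling, outdated cost inputs shift which bids are treated as reasonable, which is why comprehensive, current cost data for all industry vessels matter.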
The Army recognized in its comments that the cost information for industry hopper dredges is outdated and needs to be evaluated, and has initiated an effort to improve the cost data. While we recognize that updating the cost data could potentially increase or decrease the Corps’ cost estimates, we believe that unless the Corps has updated cost data for all industry vessels, there is no assurance that the Corps’ cost estimates are a reliable tool for determining whether industry bids are within 25 percent of the government estimate as required by law. The Army’s suggestion of clustering several navigation projects for west coast contracts—similar to the Dredging Contractors of America’s comment numbered 3—is one of several possible options for addressing the costs of moving dredges to and from the west coast region. 2. In our report, we illustrated how a rigid interpretation of the Corps’ policy that limits the number of days its vessel can operate resulted in inefficient operations. We recognize that the Corps’ hopper dredge owning district has the flexibility to schedule the dredge within the maximum allowable number of days. However, because time-sensitive dredging needs may disrupt the scheduled use of the dredge, we believe that it would be prudent for the Corps to examine whether there is a need for some flexibility in implementing the annual operating restrictions on the Corps vessels. As discussed in our report, the Corps incurs many of the costs for maintaining and operating its hopper dredges regardless of how much the vessels are used. While it is true that the Corps would save contracting costs if the river is not shoaling and the work previously performed by the Wheeler does not need to be done, the Corps is still paying money to maintain the Wheeler idle in reserve when the vessel could be working to pay for its costs. 
We recognize that it is plausible that private industry’s hopper dredging costs could decrease over time if its vessels performed more work. More important to the government, however, is how any potential decrease in industry costs is passed along to the government in the form of lower prices. The data in our report raise questions about whether any cost savings industry has realized have trickled down to the government. The Army’s suggestion regarding a sensitivity analysis is one of many analyses that it may wish to consider in its comprehensive analysis of the costs and benefits of existing and proposed restrictions on the use of the Corps’ hopper dredges. 3. As acknowledged in our report, private industry has increased its hopper dredging capacity. However, the exact change in capacity and the degree to which the capacity increases are attributable to the restrictions on the Corps vessels is uncertain. While it is plausible that the restrictions may have caused industry to make these capital improvements, representatives of the dredging industry told us that the restrictions were one of several factors that they considered before building or acquiring additional vessels, including the construction of the Bayport and the Liberty Island. It is uncertain whether these investments occurred as a result of the restrictions or whether the investments were necessary to remain competitive in the industry. Hypothetically, more vessels and increased capacity should translate to more bids and lower bid prices. However, our analysis showed that the number of industry bids per hopper maintenance dredging solicitation declined from about 3 bids before restrictions to roughly 2.4 bids after restrictions were placed on the Corps vessels. This finding reinforces the need for a comprehensive analysis of the benefits and costs of the restrictions on the Corps’ dredges. 4. The Army’s comment reinforces our concerns about whether the restrictions have resulted in proven benefits.
This is one of the issues that should be considered in the comprehensive analysis we are recommending. 5. The Army recognizes the need to update the information being collected by its Dredging Information System and has initiated efforts to address this issue. Obtaining and analyzing such information is an important prerequisite to determine whether all hopper dredging needs, in particular time-sensitive needs, are being met in the manner most cost-effective to the government. While the Army refers to a mechanism it has developed with industry to ensure that time-sensitive and urgent dredging needs are managed, we believe it is premature to claim that the process has resulted in meeting time-sensitive dredging needs in a cost-effective manner. 6. The Army’s comments did not address the lack of supporting documentation for its June 2000 Report to Congress. Instead, the Army reiterated points it has made in its previous comments and raised a number of other issues related to hopper dredging. Until a comprehensive analysis is performed on the benefits and costs of restrictions on the Corps’ hopper dredge fleet, there is no assurance that the Nation’s hopper dredging needs are being met in the manner that is most economic and advantageous to the government. DCA generally agreed with our recommendations. However, DCA strongly believes that reducing the scheduled use of the Corps’ hopper dredges has resulted in proven benefits. DCA stated that available information and data show that benefits have resulted. However, we believe the relationship between the restrictions on the Corps’ hopper dredge fleet and benefits to the government remains unproven. First, the extent to which use restrictions on the Corps’ vessels were a factor in industry’s investment decisions to increase its fleet size and add dredging capacity is unclear.
Second, the analysis provided by DCA to support its claim is not persuasive; it covered an insufficient period of time and presented data in a potentially misleading fashion. Specifically, DCA only included data for activities that occurred after the implementation of the first restriction on the Corps’ dredges. We believe that an analysis of the effects of the restrictions should include data covering the period before and after the restrictions because the time period before restrictions establishes the appropriate baseline to compare changes resulting from the restrictions. Discussed below are our corresponding detailed responses to DCA’s nine numbered comments in the three-page attachment to its letter. DCA also provided 21 pages of appendices, which we have not included in this final report because of the length. However, we have considered all of DCA’s comments in our response. 1. We have added language to expand our description of the legislation enacted in 1996 that further increased the role of private industry in hopper dredging. 2. We disagree that the Corps receives adequate, updated contractor cost information through claims and other audit-related activities. As part of this process, industry only provides the Corps updated information to support specific costs that they believe are outdated. They are not required to provide updated information for all costs. In addition, the updated information obtained through claims and other audit-related activities does not ensure that data are collected consistently for each of the vessels. For a vessel involved in multiple claims, the Corps may have more up-to-date costs than for a vessel with fewer claims. DCA stated in its comments that current cost information should be used because industry faces increasing labor, fuel, maintenance, and insurance costs. As mentioned in our report, the Corps adjusts estimated costs annually to reflect current price levels.
These adjustments, however, do not account for fundamental changes, such as a vessel reaching the end of its depreciable life, which may also affect the cost estimate. For example, according to a Corps official, industry vessels are depreciated over 20 to 25 years. In 2003, 9 of the 16 industry vessels were 20 years or older and thus may be nearing the end of their depreciable lives. Unless the Corps has updated data for all costs and for all industry vessels, there is no assurance that the Corps’ cost estimates are a reliable tool for determining whether industry’s bids are within 25 percent of the government estimate as required by law. 3. As our report recommends, we believe the Corps should examine its policies related to calculating transit costs. We agree that DCA’s suggestion is one of several possible options for addressing this issue. 4. The extent to which the restrictions on the Corps vessels caused industry to make the investments that DCA cited as proven benefits is unclear. First, representatives of the dredging firms told us the restrictions were only one of several factors they considered before building or acquiring additional vessels, including the construction of the Bayport and Liberty Island. Second, firms must routinely replace and update equipment to remain competitive in any industry. While DCA stated that there was a substantial investment in the Columbia following restrictions, the vessel was originally built in 1944 and designed to transport military equipment during World War II. We believe it is plausible that the restrictions on the Corps’ vessels may have contributed to industry’s investment decisions; however, it is unclear to what extent the restrictions contributed to these decisions. 5. While private industry has added capacity, we question the basis for DCA’s calculation of the exact change in capacity and the degree to which the capacity increases are attributable to restrictions on the Corps’ hopper dredges.
Over half of the increase in capacity cited by DCA is attributable to the return of one vessel—the Stuyvesant—to service in the United States. However, the Stuyvesant worked in the United States prior to the restrictions, and thus it is questionable whether this constitutes an increase in capacity. With regard to the portion of capacity increase due to the construction of the Bayport and the Liberty Island, as previously stated in response 4 above, the owners of these vessels said the restrictions were only one of several factors they considered in their decisions to build these two vessels. For these reasons, we believe it is questionable whether the capacity increases cited by DCA are proven benefits of the restrictions. 6. We believe that DCA’s claims are based on incomplete information and can be misleading because its analysis only included data after the implementation of the first restriction in fiscal year 1993. As a result, DCA only examined the marginal effects after the Wheeler was placed in ready reserve, but not the effects of all the restrictions. We believe a more appropriate analysis of the effects of the restrictions would compare data covering the periods before and after all restrictions because the time period before restrictions establishes the appropriate baseline to compare changes resulting from the restrictions. The following example illustrates how not examining the entire time period before and after all restrictions may produce incomplete and misleading results. We found that the percentage of bids less than the Corps’ cost estimate was 55 percent after the fiscal year 1993 restriction went into effect (fiscal years 1993 through 2002) and 58 percent after the Wheeler was placed in reserve (fiscal years 1998 through 2002). This finding is consistent with DCA’s claim, and taken alone could be viewed as an improvement. 
However, prior to the 1993 restriction (fiscal years 1990 through 1992), 76 percent of the winning bids were less than the Corps’ cost estimate. Thus, although there has been an increase in the percentage of bids less than the Corps’ cost estimate following the placement of the Wheeler in reserve, this change is significantly less than what occurred before the restrictions. Furthermore, in an appendix to its comments, DCA criticized our approach of presenting data as averages across a number of years to assess the effects of the restrictions, and argued that a year-to-year evaluation should be used. However, in addition to restrictions on the Corps’ fleet, a number of other factors can lead to changes in the number of bids per solicitation and winning bid relative to the Corps’ cost estimate from one year to the next. For example, high water flows in the Mississippi River can result in high accumulation of material at the mouth of the Mississippi River and increase the demand for time-sensitive dredging requirements. During such periods, the winning bids relative to the Corps’ cost estimate may increase. However, the information necessary to control for these factors is unavailable. For example, the Corps does not collect data on time-sensitive dredging needs. As a result, we believe that presenting changes as averages across a number of years is more appropriate because it mitigates the annual variability in the factors that can also affect the number of bids per Corps solicitation and winning bid relative to the Corps’ cost estimate. 7. We disagree with DCA’s comment. In fact, the historical data do indicate that, in general, in years when more material is available to industry, industry submits fewer bids per Corps solicitation.
The information presented in figure 3 in our report shows that there is an inverse relationship between the estimated volume of material dredged and the annual bids per solicitation, which is statistically significant at the 95 percent confidence level. 8. DCA agreed that seven companies operated in the U.S. hopper dredging market prior to the fiscal year 1993 restriction, while five companies remain in the market today. However, DCA stated that the number of companies competing on a nationwide basis has increased from four to five in the last 10 years. Regardless of whether dredging firms operated on a regional or national basis, prior to the restrictions seven firms provided hopper dredging services and now there are five firms. Furthermore, as recognized in our report, the consolidation in the industry does not necessarily mean that competition has been reduced because the new industry structure could have resulted in enhanced capacity, flexibility, and efficiency for the remaining firms. Moreover, regardless of the number of firms in the industry, DCA acknowledged that the number of bids is more indicative of competition than merely the number of companies. As stated in our report, the number of industry bids per Corps solicitation has decreased on a nationwide basis from approximately 3 bids in the 3 years prior to the restrictions (fiscal years 1990 through 1992) to roughly 2.4 bids in the period following the restrictions (fiscal years 1993 through 2002). 9. We agree with DCA’s comment, which is already addressed by our recommendations. In addition, Chuck Barchok, Diana Cheng, Richard Johnson, Jonathan McMurray, Ryan Petitte, and Daren Sweeney made key contributions to this report.

The fiscal year 2002 Conference Report for the Energy and Water Development Appropriations Act directed GAO to study the benefits and effects of the U.S. Army Corps of Engineers' (Corps) dredge fleet.
GAO examined the characteristics and changing roles of the Corps and industry in hopper dredging; the effect of current restrictions on the Corps' hopper dredge fleet; and whether existing and proposed restrictions on the fleet, including the proposal to place the McFarland in ready reserve, are justified. In addition, GAO identified concerns related to the government cost estimates the Corps prepares to determine the reasonableness of industry bids. In response to 1978 legislation that encouraged private industry participation in dredging, the Corps gradually reduced its hopper dredge fleet from 14 to 4 vessels (the Wheeler, the McFarland, the Essayons, and the Yaquina) while a private hopper dredging industry of five firms and 16 vessels has emerged. Dredging stakeholders generally agreed that the Corps needs to retain at least a small hopper dredge fleet to (1) provide additional dredging capacity during peak demand years, (2) meet emergency dredging needs, and (3) provide an alternative work option when industry provides no bids or when its bids exceed the government cost estimate by more than 25 percent. In reviewing the cost estimation process, GAO found that the Corps' estimates are based on some outdated contractor cost information and an expired policy for calculating transit costs. The restrictions on the use of the Corps' hopper dredge fleet that began in fiscal year 1993 have imposed costs on the Corps' dredging program, but have thus far not resulted in proven benefits. The Corps estimates that it spends $12.5 million annually to maintain the Wheeler in ready reserve, defined as 55 workdays plus emergencies, of which about $8.4 million is needed to cover the costs incurred when the vessel is idle. A possible benefit of restrictions on the Corps' vessels is that they could eventually encourage existing firms to add dredging capacity or more firms to enter the market, which, in turn, may promote competition, improve dredging efficiency, and lower prices. 
Although there has been an increase in the number of private industry hopper dredges since the restrictions were first imposed, the number of private firms in the hopper dredging market has decreased. In addition, during the same time period, the number of contractor bids per Corps solicitation has decreased, while the number of winning bids exceeding the Corps' cost estimates has increased. Although the Corps proposed that the McFarland be placed in ready reserve, it has not conducted an analysis to establish that this action would be in the government's best interest. Specifically, in a June 2000 report to the Congress, the Corps stated that the placement of the Wheeler in ready reserve had been a success and proposed that the McFarland also be placed in ready reserve. However, when asked, the Corps could not provide any supporting documentation for its report. Furthermore, according to the Corps, future use of the McFarland will require at least a $25 million capital investment to ensure its safety, operational reliability, and effectiveness. Such an investment in a vessel that would be placed in ready reserve and receive only minimal use is questionable.
A number of federal statutes prohibit housing discrimination, but the Act is the most comprehensive. This report focuses on enforcement of fair housing rights under the Act, which is one of the federal government’s central tools for fighting discrimination in housing. The Act (as amended) prohibits discrimination on the basis of race, color, religion, national origin, sex, handicap, and familial status. The Act applies to certain issues, including discrimination in the sale, rental, advertising, or financing of housing; the provision of brokerage services; and other activities related to residential real estate transactions. The Act covers all “dwellings,” which are defined generally as buildings designed to be used in whole or part for a residence, as well as vacant land offered for sale or lease for constructing or locating a building, with some exceptions. The enforcement process granted to HUD and others under the Act has been expanded since the law’s enactment in 1968. The original Act gave no enforcement powers to HUD—other than the ability to investigate and conciliate complaints—and gave limited enforcement powers to private persons and the Attorney General. Under the 1968 Act, private persons who believed they had been discriminated against in housing could enforce the Act by filing a complaint with HUD, and HUD could investigate and conciliate those complaints. The 1968 Act had no mechanism for HUD to adjudicate complaints, so HUD had no options for further enforcement if conciliation efforts failed. The 1968 Act also authorized aggrieved persons to bring a civil action within 180 days of the date of the alleged discrimination. The relief that courts could provide in such cases included only injunctive relief, actual damages, punitive damages up to $1,000, and, where the plaintiff was not able to pay his or her own attorneys’ fees, those fees.
Under the 1968 Act, the Attorney General could initiate a civil suit under some circumstances—for example, when there was reasonable cause to believe that a “pattern or practice” of resistance had emerged to the provisions of the Act. The Attorney General could also bring a suit if a group of persons had been denied a right granted by the Act that raised an issue considered to be of general public importance. However, damage awards were not available in actions brought in these types of cases. The 1988 amendments to the Act provided HUD, private persons, and the Attorney General with more tools and remedies for enforcement. Currently, under the Act as amended, there is an adjudication mechanism, so HUD’s enforcement efforts need not end if conciliation efforts do not succeed. Additionally, aggrieved parties can elect not to utilize the administrative enforcement process and can file civil actions in federal court within 2 years of the alleged discrimination; and the Act provides for actual and punitive damages without limitation and for injunctive relief and attorneys’ fees. As under the original 1968 Act, presently the Attorney General can bring a civil action in pattern or practice cases or cases of public importance. The 1988 amendments allow the Attorney General to commence civil actions in cases of breached conciliation agreements or discriminatory housing practices referred by HUD and to enforce subpoenas. In cases commenced by the Attorney General, courts can award civil penalties up to $100,000 for the second violation, in addition to compensatory monetary damages and attorneys’ fees. The 1988 Act also created a deadline of 100 days for HUD’s investigation and reasonable cause determination. FHEO directs HUD’s enforcement efforts, although some state and local FHAP agencies handle most enforcement efforts for their states and localities. 
FHEO refers complaints alleging violations of state and local fair housing laws that are administered by a certified FHAP to that agency. A certified agency that has entered into a memorandum of understanding with FHEO is eligible to participate in the Fair Housing Assistance program. Under this program, FHAP agencies receive funding for fair housing activities and must conform to reporting and record maintenance requirements, agree to on-site technical assistance, and agree to implement policies and procedures provided to the agencies by FHEO. FHEO has staff in each of HUD’s 10 regional offices, called “hubs,” through which it conducts its enforcement efforts. FHEO staff has responsibility for the intake, investigation, and resolution of some of these complaints. Aggrieved persons may also go directly to FHAP agencies, which then perform the intake process. However, FHEO must ultimately approve the filing of all complaints involving alleged violations of the Act. If an aggrieved party contacts a FHEO office regarding discrimination that allegedly occurred in a state or locality that has a FHEO-certified “substantially equivalent” state or local agency (that is, a FHAP agency), FHEO will complete the intake process and refer the complaint to that agency for enforcement. Under the Act and HUD’s implementing regulations, FHEO can certify an agency if (1) the rights and remedies available under the state or local laws are substantially equivalent to those available under the Act, and (2) the operation of the agency demonstrates that it meets performance standards for timely and thorough fair housing complaint investigations, conciliation, and enforcement. The local law must require the agency to meet the 100-day investigation benchmark contained in the Act. Although the FHAP agency enforcement process must be substantially similar to the HUD process, it need not be exactly the same.
That is, FHAP agencies review incoming complaints to determine if they allege a violation of their state or local fair housing laws, the Act, or both; investigate complaints to determine if fair housing laws have been violated; and provide for the final adjudication of complaints, but each of the 100 FHAP agencies can take different actions to accomplish these tasks. These certified state and local agencies could be civil rights agencies like the Michigan Department of Civil Rights. FHEO field offices monitor the FHAP agencies, review cases FHAP agencies investigate to determine if the agencies are eligible for payment under FHAP, and provide technical assistance. FHEO field offices also have responsibility for other functions, such as assessing compliance with fair housing regulations for entities receiving federal funds, providing community education and outreach efforts for fair housing issues, and managing grants under the Fair Housing Initiatives Program, which funds public and private entities combating discriminatory housing practices. FHEO tracks fair housing enforcement efforts through its Title Eight Automated Paperless Office Tracking System (TEAPOTS) database. FHEO enforcement personnel input information at major stages of the enforcement process, such as when a complaint is filed at FHEO, when an investigation of a complaint begins, and when a case is resolved. FHEO managers use TEAPOTS to track the progress of fair housing cases and enforcement efforts. All FHAPs have access to TEAPOTS and are required to report their performance information (such as timeliness milestones for initiated and completed investigations) into TEAPOTS or other data and information systems technology agreed to by HUD. TEAPOTS captures numerous aspects of enforcement efforts both nationally and by hub, including the numbers and lengths of enforcement actions; characteristics of complaints, such as basis of discrimination (race, religion, etc.) 
and subject matter of discrimination (sale, rental, etc.); and type of resolution. The fair housing enforcement process consists of three stages: (1) intake, during which FHEO or FHAP agencies receive inquiries from individuals with housing discrimination concerns and determine whether those concerns involve a potential violation of the Act; (2) investigation, during which FHEO or FHAP agency investigators collect evidence to determine if reasonable cause exists to believe that a discriminatory housing practice has occurred or is about to occur; and (3) adjudication, during which an independent fact finder determines whether the person charged with discrimination (the respondent) did in fact violate the Act. The independent fact-finding may occur before an administrative law judge (ALJ) or, if one of the parties chooses, a federal court or state court for complaints filed with HUD and FHAP agencies, respectively. The Act and other guidelines establish timeliness benchmarks for completing certain parts of the enforcement process. An overview of HUD’s basic fair housing enforcement process is shown in figure 1. In the intake stage, FHEO hubs receive inquiries (called “claims” from 1996 to 2001), determine which ones involve a potential violation of the Act, and file fair housing complaints for those that do. FHAP agencies also receive inquiries and work with complainants to determine whether a potential violation of the Act, state or local law, or both has occurred. According to FHEO headquarters staff, the process usually starts when an individual contacts a FHEO hub by telephone, fax, or mail; in person; or over the Internet. Intake analysts refer numerous contacts they receive that are not related to fair housing to appropriate outside organizations. Intake analysts record contacts dealing with fair housing as “inquiries” in TEAPOTS. 
The analysts interview complainants and may do other research—for example, property searches and searches of newspaper or corporate records—to see if enough information exists to support filing a formal complaint. This initial process is known as “perfecting” a complaint, although it does not always result in a complaint. In order for a complaint to be perfected, it must (1) contain the required four elements of a Title VIII complaint: the name and address of the aggrieved party (the person who was allegedly injured by a discriminatory housing practice), the name and address of the respondent, a description and address of the dwelling involved, and a concise statement of the facts leading to the allegation; and (2) satisfy the Act’s jurisdictional requirements: that the complainant has standing to file the complaint; that the respondent, dwelling, subject matter of discrimination (e.g., refusal to rent or refusal to sell), and the basis (e.g., race, color, familial status) for alleging discrimination are covered by the Act; and that the complaint has been filed within a year of the last occurrence of the discriminatory practice. Hub directors decide whether these conditions are met. If so, the inquiry becomes a perfected complaint; otherwise, it is dismissed. Intake analysts record key information about perfected complaints in TEAPOTS, have complainants sign the official complaints, and send letters of notice about the complaint and the enforcement process to both complainants and respondents. The complaint file is then usually delivered to the investigator. According to FHEO headquarters staff, the intake stage for a complaint that will be investigated by FHEO—rather than a FHAP agency—is usually considered complete when the complaint file is delivered to a FHEO investigator. For such complaints, FHEO’s Title VIII Intake, Investigation, and Conciliation Handbook (Handbook) establishes a timeliness benchmark of no more than 20 days for the intake stage.
However, FHEO also performs intake for inquiries that, because of their characteristics, are ultimately referred to a FHAP agency for investigation. For example, if a person alleges a discriminatory practice that is within the jurisdiction of a FHAP agency, FHEO intake analysts complete the intake stage, file the complaint, and refer the case to the FHAP agency. For such complaints, the Handbook establishes a timeliness benchmark of no more than 5 days for the intake stage. During the investigation stage, FHEO investigators collect evidence to determine whether reasonable cause exists to believe that a discriminatory housing practice has occurred or is about to occur. Similarly, FHAP agencies may collect evidence to determine if a local or state fair housing law has been violated. The Handbook provides guidance for investigators during this process, although the Handbook notes that investigations will vary (see table 1). According to agency guidance, once an investigator completes an investigation, the appropriate hub director reviews the results and makes a determination of whether reasonable cause exists to believe that a discriminatory housing practice has occurred or is about to occur. With the concurrence of the relevant HUD regional counsel, the hub director issues a determination of reasonable cause and directs the regional counsel to issue a “charge” on behalf of the aggrieved person. The charge is a short written statement of the facts that led FHEO to the reasonable cause determination. If the hub director decides that no reasonable cause exists to believe that a discriminatory housing practice has occurred, then, upon concurrence of the regional counsel, the hub director dismisses the complaint.
In a March 6, 2003, memorandum, HUD’s Office of General Counsel (OGC) in headquarters requested that regional counsels send OGC’s Office of Fair Housing the final draft of any charge that they propose to file and that they not file charges until they have received a response from OGC’s Office of Fair Housing. At any stage before the investigation is complete, the enforcement process can end by either conciliation or administrative closure. FHEO’s Handbook states that conciliation is the process by which FHEO “assists the complainant and respondent in achieving a just and mutually acceptable resolution of the disputed issues in a Title VIII complaint.” The Act requires that HUD try to conciliate all complaints to the extent feasible, starting at the time the complaint is filed and continuing until the final charge is issued or the case is dismissed. The Handbook and federal regulations implementing the Act allow an individual to act as investigator and conciliator on the same case, but the regulations state that generally investigators will not conciliate their own cases. Instead, investigators who are not assigned to a complaint’s investigation generally conciliate it. Conciliation agreements are to seek to protect the public interest in furthering fair housing through various provisions, such as requiring the respondent to provide FHEO with periodic reports. FHEO may also close complaints administratively for several reasons—for example, if the complainant withdraws the complaint. The regulations implementing the Act require FHEO and the FHAP agencies to complete an investigation, conciliate a case, or otherwise close a complaint within 100 days of the filing date unless doing so is “impracticable.” An investigation is considered complete, and the 100-day deadline ends, when a hub director makes a cause or no-cause determination in which the regional counsel concurs.
If the investigation cannot be completed within 100 days, FHEO must notify the complainant and the respondent in writing of the reasons for the delay. This written notification is called the 100-day letter. Once a determination of reasonable cause has been made and a charge has been issued, an independent fact finder determines whether the respondent has in fact violated the Act (FHAP agencies also use independent fact finders to make this determination). HUD’s regulations state that OGC must file charges with the Office of Administrative Law Judges within 3 business days. When the complainant and respondent receive notice of the charge, each has 20 days to decide whether to have the case heard in a federal district court or by an ALJ. The complainant may intervene as a party, and the complainant and the respondent may be represented by a lawyer before the ALJ. The Act also requires that the ALJ hearing begin within 120 days of the date of the charge, unless impracticable, and that the ALJ decision be issued within 60 days of the end of the ALJ hearing, unless impracticable. If the ALJ determines that no discrimination has occurred, the case is dismissed. If the ALJ determines that discrimination has occurred, he or she is authorized to award injunctive or other equitable relief, economic and noneconomic damages, and civil penalties, as applicable. Any party adversely affected by the ALJ’s decision may appeal it to the HUD Secretary and then to the appropriate appellate court, within certain time frames. HUD and any person entitled to relief under the final decision may petition the appropriate court of appeals to have the final decision enforced. If either party elects to go to federal district court after the charge is issued, the HUD Secretary must authorize a civil action in federal district court, and the U.S. Attorney General must undertake the action on the complainant’s behalf. 
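The post-charge deadlines described above can be laid out with simple date arithmetic. This is a hedged sketch: it uses calendar days throughout (OGC’s 3-day charge-filing deadline, which runs in business days, is omitted), and the 120-day hearing and 60-day decision limits apply only “unless impracticable.”

```python
from datetime import date, timedelta

def adjudication_deadlines(charge_date, hearing_end=None):
    """Map each statutory step after a charge issues to its latest date:
    20 days for the parties to elect a forum, 120 days from the charge
    for the ALJ hearing to begin, and 60 days from the end of the
    hearing for the ALJ decision to be issued."""
    deadlines = {
        "forum election (complainant/respondent)": charge_date + timedelta(days=20),
        "ALJ hearing begins": charge_date + timedelta(days=120),
    }
    if hearing_end is not None:
        deadlines["ALJ decision issued"] = hearing_end + timedelta(days=60)
    return deadlines
```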
The complainant may participate and be represented by a lawyer in this court proceeding. The respondent may also choose to be represented by counsel. Any party adversely affected by the final court decision may file a petition in the appropriate appellate court. Some practices for handling fair housing complaints varied significantly among the FHEO and FHAP agencies we visited, and officials noted that certain practices had helped them expedite or improve the quality of the enforcement process. For example, some FHEO offices and FHAP agencies used experienced investigators during the intake stage, while others did not. Some officials at locations that used experienced investigators said that this practice had improved the quality of intake and decreased the overall length of the enforcement process. The variation in enforcement practices between FHEO and FHAP agency locations is not surprising, given the freedom those offices have to administer the enforcement process. In fact, there is a potential for the variation to be even greater than we observed, as we visited only 3 of the 10 hubs and just 7 of the 100 FHAP agencies. Even this limited look revealed practices in some locations that could potentially expedite cases if adopted elsewhere. However, HUD has not performed a systematic nationwide review of the enforcement practices at all of these various locations to identify practices with such potential. We found two personnel practices that officials at some FHEO and FHAP agency locations believed had improved their enforcement processes. First, several locations we visited used experienced investigators during their intake processes, while others generally did not. Although all three hubs we visited used dedicated intake analysts rather than current investigators to handle intake responsibilities, two hubs used some former investigators as intake analysts. Several FHAP agencies we visited had no dedicated intake analysts. 
At these agencies, current investigators simply shared the intake of complaints. Some FHEO officials told us that using investigators for intake improved the thoroughness of intake and decreased the overall length of the enforcement process. Some officials said that investigators have a better understanding of the information needed for jurisdiction and investigations, and they thus focus their intake efforts on getting that information. Second, one FHAP agency we visited had instituted a team approach to enforcement. The agency had changed its entire enforcement process in 1997 to incorporate this approach, using several teams consisting of “civil rights representatives” (as opposed to intake analysts and investigators) and a “coach attorney.” Teams handled the enforcement process starting with the initial contact and finishing up with the reasonable cause recommendation. Teams rotated through the intake function for 1 week each month, investigating all cases that originated in intake that week. Although this FHAP agency made other changes simultaneously with the change to the team approach, FHAP agency officials said that the team approach had helped its backlog of cases drop significantly and improved the quality of its enforcement process. It is not possible to isolate the team approach’s impact on the FHAP agency’s fair housing effort, and the complaint numbers provided by the agency included other civil rights enforcement work, such as enforcement of equal employment opportunity laws. However, FHAP agency officials told us that, after the team approach had been fully implemented, the average complaint processing time fell from 476 days to 335 days. In addition to personnel practices, we found that one FHAP agency was using a software system to improve the intake procedure. 
In addition to using TEAPOTS, this particular FHAP agency, in conjunction with a software company, had developed Contact Management System (CMS) software that had significant extra capabilities. The CMS generated a series of initial intake questions for the FHAP agency’s civil rights representative to ask during intake and then constructed follow-up questions based on the answers to the previous questions. These follow-up questions reflected the elements that would be necessary to prove discrimination in a given case. At the end of its approximately 2-hour intake process, the FHAP agency tried to have either a perfected complaint or a reason that the contact did not warrant a perfected complaint. A FHAP agency official told us that the CMS software had helped decrease the length and improve the thoroughness of its enforcement process. Again, it is not possible to isolate the impact of the CMS software, but after the software was installed, average complaint processing times for the FHAP agency’s fair housing and other civil rights work decreased from 335 days to 252 days. We observed numerous variations in investigative practices among the FHEO and FHAP agencies we visited. In several locations, officials said that their specific practices had helped them expedite the process, improve the quality of the process, or both. First, some locations involved attorneys earlier and more frequently during the investigation than other locations. Second, some FHEO offices and FHAP agencies simultaneously investigated and conciliated complaints, while others delayed the investigation while conciliating. Third, one hub and one FHAP agency customarily used separate persons to investigate and conciliate a complaint, while at other hubs and FHAP agencies, a single person handled both of these tasks. Fourth, some enforcement locations employed a tool called a “bubble sheet” to help meet the 100-day requirement for completing investigations. 
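The adaptive questioning that officials attributed to the CMS can be illustrated with a small decision tree. The actual CMS question bank and branching logic are proprietary, so the questions, answers, and tree structure below are invented for illustration only.

```python
# Each node maps to (question, {answer: next_node}); "done" ends the interview.
# These questions and branches are hypothetical examples, not CMS content.
INTAKE_TREE = {
    "start": ("What housing practice are you reporting?",
              {"refusal to rent": "rental", "refusal to sell": "sale"}),
    "rental": ("Did the housing provider give a reason for the refusal?",
               {"yes": "reason", "no": "done"}),
    "sale": ("Was the dwelling publicly advertised for sale?",
             {"yes": "done", "no": "done"}),
    "reason": ("What reason was given?", {}),
    "done": (None, {}),
}

def run_intake(answers):
    """Walk the question tree, choosing each follow-up question from the
    previous answer, and return the ordered list of questions asked."""
    node, asked = "start", []
    for answer in answers:
        question, branches = INTAKE_TREE[node]
        if question is None:
            break
        asked.append(question)
        node = branches.get(answer, "done")
    return asked
```

The design point is that each follow-up question is selected by the previous answer, so the interview gathers the elements needed to prove (or rule out) discrimination for that particular kind of case.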
Last, one FHAP agency used software that provided additional investigative tools that TEAPOTS did not provide. At the FHEO offices and FHAP agencies we visited, investigators and attorneys interacted to different degrees, and several officials told us that greater interaction had resulted in shorter and more thorough investigations. For example, at one hub the regional OGC had weekly meetings with investigators at the same location and biweekly meetings with investigators at other offices in the region. Interaction at another hub was less formal, but both regional OGC attorneys and the investigators said that frequent and meaningful interaction occurred on most cases through the informal “open-door” approach. At a third hub, OGC attorneys were not yet formally interacting with investigators, although they had recently signed a memorandum of understanding to do so. At FHAP agencies, we saw similar variations. One FHAP agency, as mentioned earlier, had a “coach attorney” on each team to help from the earliest stages of the investigations. At other FHAP agencies we visited, investigators had more limited interaction with the FHAP agency attorney. In our survey of the 10 hub directors, 5 said that involving OGC in investigations had a great or very great impact on investigations, improving thoroughness, decreasing length, or both. Officials cited various reasons for this result, including that the interactions with OGC: reduced the amount of work wasted on aspects of a case that should not receive investigative attention, shortening investigations; reduced the amount of additional work involved in seeking attorney concurrence, decreasing the length of investigations; helped the investigators pursue the appropriate leads at the best times during an investigation, increasing thoroughness; and created more cooperation among complainants and respondents, as the parties believed that attorneys were more involved in the enforcement process. 
A memorandum of understanding between FHEO and OGC called for OGC attorneys to have “significant involvement at complaint intake, in determinations of jurisdiction, in investigative plan development, in conducting investigations, in the effort to resolve cases informally through conciliation, and in making determinations of reasonable cause.” That memorandum also required each regional counsel and each FHEO hub director to enter into working agreements with each other to formalize their working relationships. As of November 24, 2003, every hub had those agreements in place, and one HUD official said that the new memorandum of understanding had resulted in improved communication between investigators and OGC. Some HUD locations we visited put investigations on hold when conciliation looked likely, while others did not. Some fair housing officials at the locations that simultaneously investigated and conciliated told us that doing so not only expedited the enforcement process but could also facilitate conciliation. Because the parties were aware that the investigation was ongoing, two hub directors told us they were sometimes more willing to conciliate. Additionally, some officials at the offices that delayed the investigation while attempting conciliation told us that this practice increased the number of calendar days necessary to investigate a case. However, one hub official told us that simultaneous investigation and conciliation could waste resources, as it might not be necessary to obtain further evidence in a case that would be conciliated. Overall, 6 of the 10 hub directors told us that simultaneous investigation and conciliation had a great or very great impact on the length of the enforcement process, and all 6 said that the practice decreased the length. Four directors said that the practice had a great or very great impact on the thoroughness of investigations, and these four told us that it increased the thoroughness of investigations.
Investigators at some FHEO locations and FHAP agencies customarily conciliated their own cases, while other locations usually used separate investigators and conciliators. Officials we spoke with were divided on the impact of this practice. Some officials told us that having the same person performing both tasks had not caused problems. Other officials— including some at locations where investigators conciliated their own cases—indicated a preference to have different people perform these tasks. One official said that separating these tasks enabled simultaneous conciliation and investigation of a complaint, a practice that speeded up the process. Another official noted that parties might share information with a conciliator that they would not share with an investigator and that a conflict of interest might result if one person tried to do both. The same official said that although investigators were not allowed to use information they learned as conciliators during investigations, the information could still influence the questions conciliators posed—and thus the information they learned—as investigators. Similarly, at one hub an OGC official told us that information learned as a result of conciliation efforts should not be included in investigative findings. A few enforcement officials at locations that did not separate the tasks said that they did not have enough staff to have separate conciliators. One hub director said that a FHAP agency in its region was experimenting with a separate mediation track in addition to the conciliation mechanism. The mediation occurred early in the process and involved a professional, nongovernment mediator. The director said the mediation had usually pleased the parties, resulting in timely resolutions of cases and beneficial results. 
To help meet the 100-day requirement, several hubs and FHAP agencies used variations of what they called a “bubble sheet”—a list of investigative milestones and a time line for completing them. If an investigator missed a milestone, the “bubble burst,” and the investigator might not meet the 100-day requirement. Some officials said that the bubble sheet helped investigators complete each of the small steps of the investigation in a timely fashion and thus increased the likelihood of compliance with the 100-day requirement. Nevertheless, some officials said that the 100-day requirement was arbitrary and often unattainable, and their response was simply to send the 100-day letter at the appropriate time. As in the intake stage, the CMS software used at one FHAP agency offered additional tools during investigations that TEAPOTS did not. The CMS generated interview questions for investigations based on the information obtained in intake and then generated a list of critical documents that were usually needed for certain types of investigations. According to FHAP agency officials, the CMS improved the quality of investigations and decreased the length of cases. One FHEO center we visited was attempting to store possible witness questions in a central database for investigators to review to see if any were applicable to their cases, but this system was not automated and relied on investigators to compile the list. Officials at that center hoped that having a central location for all such questions would give investigators at least some examples of possible questions to ask. Officials at the FHAP agency noted that some data they are required to enter into TEAPOTS duplicated information in CMS and indicated that it would be preferable not to enter this information twice.
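The bubble-sheet idea amounts to checking interim milestone dates against targets derived from the filing date. The milestone names and day offsets below are invented for this sketch; actual sheets varied by hub and FHAP agency.

```python
from datetime import date, timedelta

# Hypothetical milestones; offsets are days after the filing date, all
# chosen to land inside the 100-day investigation window.
MILESTONES = [
    ("investigative plan drafted", 10),
    ("complainant interviewed", 25),
    ("respondent answer received", 45),
    ("evidence collection complete", 75),
    ("final investigative report done", 90),
]

def burst_bubbles(filing_date, completed, as_of):
    """Return milestones whose target date has passed without timely
    completion (the "burst" bubbles), flagging risk of missing the
    100-day requirement. `completed` maps milestone names to the dates
    on which they were actually finished."""
    burst = []
    for name, offset in MILESTONES:
        target = filing_date + timedelta(days=offset)
        done_on = completed.get(name)
        late = done_on > target if done_on else as_of > target
        if late:
            burst.append(name)
    return burst
```

A supervisor running such a check periodically would see, well before day 100, which investigations were falling behind schedule.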
Another FHAP agency we visited that used other software in addition to TEAPOTS had begun a pilot program to alleviate this duplication, using a program that would allow information entered into TEAPOTS to be incorporated into the FHAP agency software without keying the data again. We did not observe any significant variations across agencies in the adjudication stage of the enforcement process, possibly in part because the hubs and FHAP agencies we visited had adjudicated very few cases through their administrative processes. For example, one hub and one FHAP agency we visited told us they did not have any cases that had gone through the administrative hearing process over the last 5 years. Officials at the FHAP agency told us that in the rare cases that could go to an administrative proceeding, the FHAP agency encouraged parties to opt for state court, since otherwise the FHAP agency would have to commit resources to the process. Agency officials said that steering parties to one forum is inconsistent with the enforcement framework of the Act and the neutral role FHEO and FHAP agencies should play with respect to forum selection. The variations among hubs, centers, and FHAP agencies are not surprising, given the discretion FHEO locations and FHAP agencies have had to administer the enforcement process. While FHEO’s Handbook contains significant guidance, policies, and procedures, FHAP agencies have not been required to follow them. Rather, FHAP agencies must meet certain performance standards to obtain or maintain certification as substantially equivalent agencies. Under these standards, FHAP agencies must have engaged in timely, comprehensive, and thorough fair housing complaint investigation, conciliation, and enforcement activities. For both FHEO locations and FHAP agencies, the variations we observed could be even greater, given our small sample. 
Additionally, according to the 2001 National Council on Disability report, variations in the hubs’ practices had increased since 1996. Similarly, the potential for variations in FHAP agencies’ practices has likely grown with the number of FHAP agencies, which increased from 64 at the start of fiscal year 1996 to 100 at the start of fiscal year 2004. Many FHEO hub directors indicated that practical improvements could be made to the enforcement process; in fact, at least four directors believed that practical improvements could be made to each stage. Several hub directors provided specific ideas for improvements to the intake stage. One hub director said that her hub had recently written its own intake handbook and had set a requirement of completing intake within 15 days, rather than the 20 days specified in FHEO’s Handbook. Five of the 10 directors said that improvements could be made to the investigation stage for FHEO that would reduce the length of the process to a great or very great extent. One director specifically mentioned a practice—mediation in the early stages of the complaint process—that was in place at FHAP agencies in his region. Additionally, 4 of the 10 directors said that practical improvements could be made to the investigation stage that would increase the thoroughness of the enforcement process to a very great extent. For example, several directors suggested either increasing OGC’s staff to provide more assistance to investigators or putting a non-OGC attorney on staff at the hub or field level as a resource for the investigators. Additionally, one hub director said that a checklist she had recently developed for supervisors reviewing investigations should increase the thoroughness of investigations. Regarding the adjudication stage, one hub director said that the region was concerned about not knowing whether DOJ would accept a fair housing case if a party in the case elected to have it heard in federal district court. 
Despite the existing differences in practices among the entities involved in enforcing the Act and officials’ belief that some practices could be improved, HUD has not performed a systematic nationwide review of its or FHAP agencies’ enforcement practices since 1996. The 1996 review, a business process redesign, focused on FHEO’s practices, although one FHAP agency was represented in the process. FHEO uses other reviews for practices in its offices, such as Quality Management Reviews (QMR), in part as peer reviews that allow collaboration and information sharing between FHEO offices. Additionally, FHEO reviews cases FHAP agencies investigate to determine if the agencies are eligible for payment under the program. However, the QMRs and FHAP agency reviews are not systematic, nationwide reviews of the practices that FHEO and FHAP agencies are using. Our analysis of FHEO data on fair housing enforcement activities from fiscal year 1996 to 2003 revealed a number of trends. We found that: The number of claims or inquiries FHEO received annually remained stable until 2002 but then increased substantially. The number of complaints filed trended downward in the earlier years but then rose steadily. An increasing proportion of these complaints alleged discrimination on the basis of handicap, while the most frequently cited basis of discrimination—race—declined as a proportion of all complaints. While the number of investigations completed fell in 1997 and 1998, more investigations were completed in each subsequent year. FHAP agencies rather than FHEO conducted most of the investigations. The outcomes of investigations changed over the period, with an increasing proportion of investigations closed without finding reasonable cause to believe discrimination occurred. The frequency with which FHEO and FHAP agencies completed investigations within 100 days increased over the period. The trend data we present are reported on a fiscal year basis. 
We could not measure the volume of claims and inquiries before 1996. Generally, FHEO treated all inquiries it received between 1989 and 1994 as complaints, regardless of whether the contact alleged a violation of the Act. During parts of 1994 and 1995, FHEO did not collect information on those inquiries that did not result in an investigation. From 1996 until 2002, FHEO’s annual numbers of claims and inquiries alleging violations of the Act varied only slightly, averaging about 4,600 per year, but rose to more than 5,400 in 2003 (fig. 2). Because FHEO does not require FHAP agencies to report the number of claims and inquiries received during this period, we could not determine the number of claims and inquiries received by FHAP agencies. The combined number of complaints perfected and filed declined slightly from 1996 until 1998, but then began increasing steadily (fig. 3). By 2003, the number of complaints filed annually had risen to more than 8,000, with FHAP agencies responsible for investigating the largest share. Of the 53,866 complaints filed during the period, FHAP agencies were responsible for investigating 67 percent, and FHEO was responsible for investigating 33 percent. Overall, FHAP agencies were responsible for investigating an increasing portion of complaints filed each year from 1998 until 2003. In part, these increases may be attributable to the growth in the number of FHAP agencies nationwide. Seven states (Arkansas, Illinois, Maine, Michigan, New York, North Dakota, and Vermont), Washington, D.C., and 26 localities created FHAP agencies between 1996 and 2003 (fig. 4). FHAP agencies were responsible for investigating an increasing number of complaints filed between 1998 and 2003 in all except the Denver region (Region 8). In comparison, four FHEO regions—Boston, Chicago, New York, and Philadelphia—were responsible for investigating a declining number of complaints filed during this period. 
As the number of complaints filed rose between 1996 and 2003, the basis of, or reasons for, the alleged discrimination changed somewhat (fig. 5). First, although complaints alleging discrimination based on race continued to dominate, accounting for around 40 percent of the total, the annual percentage declined slightly over the period. The share of complaints alleging discrimination based on familial status declined from one-quarter to about one-sixth of complaints filed during the period. In contrast, complaints alleging discrimination based on handicap increased significantly, rising by more than 13 percentage points to become the second most frequently cited basis of complaints. Complaints alleging discrimination on the basis of religion, national origin, and retaliation also grew somewhat, while those alleging discrimination because of sex and color declined. The subject matter, or issue covered by the Act, of complaints also changed from 1996 through 2003. Most of the complaints filed alleged discriminatory terms, conditions, or privileges (e.g., refusal to repair, charging an inflated rent) or refusal to rent, but the share of these complaint issues fell over the period from a high of about 63 percent and 36 percent, respectively, in 1996 to 55 percent and 23 percent in 2003 (fig. 6). At the same time, the share of complaints alleging failure to make reasonable accommodation or modification rose significantly, from less than 1 percent in 1996 to 16.5 percent in 2003. Complaints alleging a single issue represented about 68 percent of complaints filed during that period, while complaints alleging more than one issue represented the remaining 32 percent. While the volume of complaints filed nationwide grew during the period, two regions, Denver (Region 8) and Seattle (Region 10) saw a decline (table 2). Conversely, two regions saw substantial increases. 
Specifically, complaints filed in Kansas City (Region 7) doubled during the period and almost tripled in the New York region (Region 2). The increases may be attributable, in part, to the addition of FHAP agencies from 1996 through 2003. By November of 1999, the New York region had two FHAP agencies online. In fiscal year 2000, the number of complaints filed in the New York region had more than doubled, rising from 213 to 442 complaints, or 6.3 percent of all complaints filed. FHEO referred 337 of these complaints to the FHAP agencies for investigation. Investigations may be completed in several ways, each leading to a particular outcome. First, an investigation is considered complete when it is closed administratively—for example, the complainant withdraws the complaint or staff are unable to locate the complainant. Second, a FHEO-conducted investigation may be considered complete when the complaint is transferred to DOJ because of FHEO’s agreement to do so in certain instances, such as in cases involving criminal activity or pattern and practice issues. Third, FHEO or the FHAP agency may complete the investigation through conciliation with the parties, or the parties may settle among themselves. Fourth, FHEO or the FHAP agency may determine that reasonable cause may exist to believe that a discriminatory housing practice has occurred (find cause). Finally, FHEO or the FHAP agency may determine that there is not reasonable cause (no cause). The number of investigations completed annually during the period rose after falling significantly in 1997 through 1998 (see fig. 7). This pattern was similar for both FHAP agencies and FHEO, though the number of investigations completed by FHAP agencies declined in 2003 and the number of investigations completed by FHEO declined in 2002. The most frequent outcome of investigations completed during the period was a determination that there was no reasonable cause to believe that discrimination had occurred (see fig. 8).
The share of investigations resulting in this outcome rose from just over 40 percent in 1996 to around 48 percent in 2003. Conversely, the share of investigations completed through successful conciliation or settlement declined somewhat during the period, but this outcome remained the second most frequent—about one-third of all investigations completed during the period. A determination of reasonable cause accounted for the smallest share of outcomes, around 5 percent of all completed investigations. TEAPOTS does not have a code specifically indicating that an investigation was completed with a finding of reasonable cause, but does provide for a date on which cause was found. We used this date to measure the number of investigations completed with a finding of reasonable cause. According to a HUD official, FHEO hubs do not record cause dates in TEAPOTS consistently. Specifically, at least two hubs may initially record the date the case is transferred to the regional counsel, rather than the date of the issuance of a determination of reasonable cause with which the regional counsel has concurred. These hubs then enter a new date when the regional counsel concurs and a charge of discrimination is issued. Therefore, the number of investigations that we report as completed during each fiscal year with a finding of reasonable cause may not match the number of charges that HUD reports, particularly for fiscal year 2003. We sorted HUD’s data on outcomes by basis of complaint, subject matter, and region. Our analysis revealed the following: The percentage of no-cause determinations varied somewhat according to the basis of discrimination alleged. Above-average proportions of investigations that involved religion, retaliation, and race ended in no-cause determinations (55, 53, and 54 percent, respectively, compared with 47 percent overall).
Similarly, 41 percent of investigations involving familial status and 40 percent of investigations involving handicap as at least one of the bases for discrimination ended in conciliation or settlement, compared with 32 percent overall. Outcomes also differed by the subject matter, or issue involved. A greater proportion of investigations that resulted in a no-cause finding had discriminatory terms or refusal to rent as an issue (61 and 30 percent, respectively). Conversely, however, relatively few complaints determined to have no cause involved refusal to sell or noncompliance with design and construction as issues (4 and 1 percent, respectively). Regional differences were also apparent in outcomes. Investigations completed in the Atlanta region (Region 4), for instance, were more likely to end in no-cause determinations—53 percent—than investigations in any other region. Similarly, investigations completed in the Denver region (Region 8) were more likely to end in conciliation or settlement. Finally, the overall percentage of investigations completed with a reasonable cause determination varied widely among regions, from as high as 10 percent in the Boston region (Region 1) to as low as 1 percent in the Fort Worth region (Region 6). Complaint investigations that resulted in a determination of reasonable cause generally proceeded to the adjudication stage. Because of TEAPOTS data limitations, we were not able to determine the final resolutions (that is, the reasons for closing the cases, including decisions on whether or not an actual violation of the Act had occurred) of all complaints that reached the adjudication stage. Specifically, as table 3 shows, for 8 percent of investigations in which FHEO made a determination of reasonable cause and 30 percent of investigations in which a FHAP agency made a similar determination, information on the reason for closure was missing in TEAPOTS. 
For the remaining FHEO and FHAP agency investigations (those for which the reason for closure was available), we identified the following: The independent fact finder found that discrimination had occurred in about 3 percent of the FHEO cases and 7 percent of the FHAP agency cases. About one-third of all cases (FHEO and FHAP agency) resulted in a judicial consent order—that is, the parties negotiated a settlement, either alone or through an appointed settlement judge, which was submitted to the independent fact finder as a voluntary agreement to resolve the case. Of the FHEO cases, 46 percent were closed when the parties elected to go to court, about 6 percent resulted in conciliation or settlement, 2 percent resulted in administrative closure, about 1 percent resulted in judicial dismissal, and in less than 1 percent the independent fact finder found that no discrimination occurred. Of the FHAP agency cases, the independent fact finder dismissed 16 percent, 9 percent resulted in conciliation or settlement, 4 percent were closed administratively, and 4 percent resulted in a finding that no discrimination occurred. The numbers of investigations completed within 100 days by both FHEO and the FHAP agencies increased significantly after 2001 (fig. 9). Some of the improvement in the number of FHEO investigations completed in 100 days may have been the result of an initiative aimed at reducing the number of aged cases in FHEO’s inventory. FHEO undertook the initiative in 2001 after completing only 14 percent of its investigations within the 100-day timeframe in 2000. The share completed within 100 days rose to 41 percent of all investigations in 2002. At the same time, the percentage of FHAP agency investigations meeting the 100-day benchmark remained fairly stable (23 to 33 percent) over the period 1996 to 2003, although the number of such investigations rose most markedly, by more than 30 percent, from 2002 to 2003.
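Analyses like those above (shares of each outcome within a basis, subject matter, or region, or shares of investigations meeting the 100-day benchmark) reduce to grouped tallies over case-level records. A minimal sketch in Python, using hypothetical records rather than actual TEAPOTS data:

```python
from collections import Counter

# Hypothetical case-level records: (basis of complaint, outcome).
# These values are illustrative only, not drawn from TEAPOTS.
records = [
    ("race", "no cause"), ("race", "settlement"), ("race", "no cause"),
    ("handicap", "settlement"), ("handicap", "no cause"),
    ("religion", "no cause"),
]

# Tally outcomes within each basis, then report each outcome's share
# of that basis's total investigations.
pair_counts = Counter(records)
basis_totals = Counter(basis for basis, _ in records)
for (basis, outcome), n in sorted(pair_counts.items()):
    share = n / basis_totals[basis]
    print(f"{basis:8s} {outcome:10s} {share:.0%}")
```

The same two-counter pattern works for any of the report's groupings; only the key (basis, issue, or region) changes.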
In January 2004, FHEO established monthly efficiency goals aimed at monitoring the progress of the hubs in meeting both the 20-day intake and 100-day investigative timeliness benchmarks. It is too soon to determine what effect this initiative might have on the timeliness of investigations. While data were generally available to measure the length of both FHEO’s and FHAP agencies’ investigations, we found that reliable data were lacking on the intake and adjudication stages handled by FHAP agencies. First, HUD does not require FHAP agencies to report on intake activities, and FHAP agencies handled intake for 42 percent of all complaints filed in fiscal years 1996 through 2003. Second, while TEAPOTS contained data on the dates that inquiries were received for investigations completed by FHAP agencies, we question the reliability of these data. According to a FHEO official responsible for TEAPOTS, FHEO staff may have routinely used the date on which cases were transferred to FHAP agencies as the “initial inquiry” date. In addition, TEAPOTS data show that 20,226 (54 percent) of complaints investigated by FHAP agencies were filed the same day that the inquiries were received—that is, the intake stage began and ended on the same date. Third, HUD does not require FHAP agencies to report the results of the adjudication of closed investigations. Accordingly, for many complaints investigated by FHAP agencies that reached the adjudication stage, TEAPOTS did not show an end date for adjudication. Finally, TEAPOTS was missing these dates for some complaints investigated by FHEO as well. Using the data that were available, we measured the typical length of each stage using medians—that is, the value at the exact midpoint of the range of days required to complete each stage (or, where there was an even number of observations, the average of the two values at the midpoint).
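The median rule just described, the value at the exact midpoint or the average of the two middle observations when the count is even, can be written out directly; the day counts below are hypothetical:

```python
def median_days(days):
    """Median as defined in the text: the exact midpoint value, or the
    average of the two middle observations when the count is even."""
    ordered = sorted(days)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Hypothetical stage lengths, in days
print(median_days([5, 12, 30]))       # odd count: midpoint value -> 12
print(median_days([5, 12, 30, 100]))  # even count: (12 + 30) / 2 -> 21.0
```

Because it takes the middle of the sorted values, the median is not pulled upward by a handful of very long investigations the way a simple average would be, which makes it a reasonable summary for case-length data.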
TEAPOTS data indicate a median of 12 days for the intake stage (from the date of the initial inquiry to the date the complaint was filed) for cases handled by FHEO in 1996 through 2003. The data showed that 35 percent of complaints investigated by FHEO were filed the same day that the claims or inquiries were received (that is, the intake stage began and ended on the same date), 28 percent within 20 days, and 31 percent within 21 days to 3 months of the date that the claim or inquiry was received. FHEO’s new monthly efficiency reports aim to, among other things, monitor the hubs’ progress in completing the intake process within 20 days. The median number of days for investigations (from the date the complaint was filed to the date the investigation was completed) was 259 for complaints investigated by FHEO (fig. 10). The median number of days varied somewhat, depending on the outcome of the investigations (e.g., administrative closure, finding of reasonable cause, or finding of no cause). FHEO completed 61 percent of its investigations within a year of the date the complaint was filed. We could not measure the time required to adjudicate all cases for which FHEO found cause. Specifically, of the 719 investigations for which FHEO determined that reasonable cause existed to believe that discrimination had occurred, TEAPOTS included data on 339 cases that were adjudicated within HUD. In these cases, the median time required to complete the adjudication process was 203 days. In an additional 330 cases, one or both parties elected to have their complaints heard in district court at the cause date or shortly thereafter. For these individuals the enforcement process continued, but FHEO did not record the length of the judicial process. TEAPOTS also did not have information on adjudication for 50 cases for which FHEO found cause. We found that numerous factors affected the length and thoroughness of the enforcement process.
First, hub directors we surveyed said that the characteristics of complaints—certain issues, for example, and the presence of multiple bases—could increase the time needed for investigations, reduce thoroughness, or both. Second, both hub directors and FHEO and FHAP agency officials said that specific practices could make investigations shorter and more thorough. Third, hub directors and other officials pointed to human capital issues as potentially increasing the length and decreasing the thoroughness of investigations, including staff shortages, low skill levels, lack of training and guidance, and inadequate travel resources. Finally, hub directors noted that national performance goals could reduce the number of aged cases but had little effect on timeliness or thoroughness. Most hub directors stated that the issue of a complaint had a great or very great effect on the amount of time required to complete the enforcement process. Complex issues such as refusing to provide insurance or credit could add time to investigations. For example, investigators might have to analyze statistics to determine if the complainant was treated differently from the norm. According to one director, some issues, such as failure to make reasonable accommodation, could require staff to conduct time-consuming on-site visits. In addition, some directors thought that complaints involving multiple issues could take longer to investigate. Agency data tend to support some of the directors’ observations (fig. 11). While investigations took a median of 211 days to complete during the period, the median for investigations involving discriminatory lending was 295 days, and for noncompliance with design and construction issues, 284 days. However, the median for investigations involving reasonable accommodation or modification was just 162 days.
Although directors generally did not believe that any particular prohibited basis had an effect on the length of investigations and thus on the enforcement process, one director did note that complaints involving multiple bases would likely increase the length of the enforcement process. We found that the median number of days required to complete investigations involving multiple bases was slightly higher than for single-basis investigations—217 days. The median number of days FHEO needed to complete an investigation varied more across specific prohibited bases than it did for FHAP agencies. For FHEO, the median length of investigations ranged from a low of 223 days (handicap) to 302 days (retaliation) (see fig. 12). For FHAP agencies, the median length of investigations ranged from a low of 175 days (handicap) to 211 days (sex). Most directors stated that the volume of complaints received in their region had a great or very great effect on the length of the enforcement process. According to one director, a large volume of complaints created competing demands on staff time. Another director noted that the volume of complaints could lengthen the enforcement process if staff resources were in short supply. Half (5) of the hub directors also believed that the volume of inquiries and complaints had a great or very great effect on the thoroughness of the enforcement process. One respondent noted that complex issues in a complaint or large volumes of complaints in a region might decrease the thoroughness of the process if resources were strained, staff were not adequately skilled to accommodate the amount or level of difficulty of the work to be done, or both. Fewer directors (2 of 10) said they believed that the basis of complaints had a great or very great effect on the thoroughness of the process. One director noted that, regardless of the factors involved, the thoroughness of the enforcement process should never be compromised.
As we have seen, some HUD and FHAP agency officials identified two intake practices that they believe shortened the enforcement process, increased its thoroughness, or both. The first—involving investigators in intake—was cited by 4 of the 10 HUD hub directors that responded to our survey. Further, officials at the FHAP agency that used the team intake and investigation approach noted that it had led to better investigations that were conducted in less time. The second practice—using CMS enforcement software—was credited by the FHAP agency that used it with facilitating both timeliness and thoroughness. HUD and FHAP agency officials also cited several investigative practices that increased the thoroughness or decreased the length of the enforcement process or both. First, several HUD officials said that early and frequent OGC involvement was important to increasing the thoroughness of investigations. Second, some enforcement officials said that simultaneous conciliation and investigation might decrease the length of investigations. Third, some HUD hub directors said that using TEAPOTS affected the length and thoroughness of the process to a great or very great extent: specifically, 6 hub directors indicated that using the system increased the thoroughness of the enforcement process, decreased the length of the process, or both. Some officials, however, also told us that TEAPOTS could be improved, and one FHAP agency’s CMS software offered an alternative system that the FHAP agency credited with reducing the length of its process and improving its thoroughness. The CMS software provides investigators more sophisticated tools than TEAPOTS offered for planning and conducting investigations. Finally, one hub official said that alternative mediation at the outset of the complaint process could help decrease the length of some complaint investigations. 
FHEO officials and others we interviewed identified human capital management challenges that had negatively affected the fair housing enforcement process, including the number and skill levels of FHEO staff, the quality and effectiveness of training, and other issues. FHEO officials told us that hiring freezes had left a number of FHEO offices with chronic staffing shortages, especially among supervisors and clerical workers and that these shortages had never been fully resolved. The shortages affected not only enforcement of the Act, but also FHEO’s other responsibilities, forcing managers to assume heavier caseloads and professional staff to perform administrative duties rather than concentrating on enforcement. Hub directors told us that hiring activity in the last 3 years had at least partially abated the chronic staffing shortages. However, they added that FHEO now faces the prospect of losing staff because a corrective action plan requires that FHEO, consistent with HUD’s key workforce planning effort, have fewer employees than it currently has. As figure 13 shows, the total number of full-time equivalents (FTE) in FHEO has fluctuated over the last 10 years, falling from a high of 750 in fiscal year 1994 to a low of 579 in fiscal year 2000. In 2003, FHEO’s FTEs rose once more to 744 after a concerted hiring initiative, although the workforce effort mentioned above suggested a level of 640. Currently, FHEO faces the challenge of meeting a mandatory ceiling of 640. FHEO comprised about 6 percent of HUD’s total workforce until fiscal year 2002 and 7 percent in 2003, when FHEO directors received hiring authority for new staff. FHEO staff have other responsibilities beyond enforcing Title VIII, including monitoring program compliance by housing providers receiving federal funds, performing Fair Housing Initiatives Program (FHIP) grant management, monitoring FHAP agencies, providing technical assistance, and performing education and outreach activities. 
FHEO hired 167 staff beginning in July 2002 as part of a departmental effort to reach its requested ceiling by September 30, 2002. That is, HUD was attempting to reach 9,100 FTEs at the end of fiscal year 2002, a number that would equal the approved fiscal year 2002 FTE level and the requested fiscal year 2003 level. FHEO’s hiring initiative, like HUD’s overall, was not in line with the department’s workforce planning efforts. The most important of these, the Resource Estimation and Allocation Process (REAP), a series of department studies conducted from 2000 through 2002 to assess HUD’s staffing requirements, recommended a total FTE ceiling for FHEO of 640. As a result of HUD’s hiring initiative, HUD had a staffing level of 9,395 at the beginning of fiscal year 2003—295 above the approved fiscal year 2002 and requested fiscal year 2003 levels. Therefore, HUD was forced both to reprogram more than $25 million to cover the costs of the newly hired excess staff and to submit to Congress a corrective action plan consistent with REAP. HUD’s Strategic Placement Plan, issued in January 2004, would reduce FHEO’s excess staff to the mandated level of 640 FTEs by the end of fiscal year 2004 through voluntary and, if necessary, involuntary reassignments. However, as of February 2004, FHEO remained at 727 FTEs, and FHEO officials told us they did not know how they would meet the mandated level on schedule. The officials also expressed concern that they would lose many of their best staff through the voluntary reassignment plan. Officials expressed concern not only with the insufficient number of staff but also with the lack of staff at key positions. Some HUD managers said that due to unfilled supervisor positions in their regions, existing supervisors were not able to review materials as carefully as they could have with those positions filled.
For example, one center director told us that investigators did not get supervisory input on initial investigative plans due to a vacant supervisory post. This center director said that the gap in supervision decreased the thoroughness and sometimes increased the length of investigations, as existing supervisors were unable to complete work in a timely manner. Some hub directors and other officials we spoke with cited concerns about the noncompetitive reassignment of staff into FHEO. They noted that the level of staff skills could influence the length and thoroughness of the enforcement process and that the reassignment process had a generally negative impact on FHEO’s overall skill levels. According to these officials, while many of the reassigned staff had worked at HUD for years, their skills were often not transferable to FHEO activities, which require specific analytical, investigative, and writing skills. Some directors cited the skills issue as a greater problem for FHEO than the actual numbers of personnel. FHEO’s own internal review also cites concerns about reassigned employees’ qualifications, skills, and work products and about the amount of time and supervision these employees require. FHEO documentation shows that 106 staff were reassigned to the program under various HUD realignments from 1998 to 2002. Figure 14 shows the numbers of staff in the three hubs we visited by their years of experience with FHEO and reassignment status. Although FHEO has brought many new staff on board recently through competitive hiring, many staff in the hubs we reviewed came to the organization via noncompetitive reassignment. Although figure 14 shows that more than half of the FHEO staff currently located in the three sites we visited had fewer than 10 years of experience with FHEO, many have a significant number of years of federal service. Figure 15 shows a snapshot of the same FHEO employees in the three sites we visited by their years of federal service. 
The figure demonstrates that half of the FHEO employees in the three sites we visited have 20 or more years of federal service, and 14 percent have 30 or more years of federal service. Retirement eligibility was an issue not only for the three sites we visited, but also for FHEO as a whole. Officials expressed concern about the loss of skilled and experienced staff to retirement, and personnel data provided by the HUD human resources staff show that 40 percent of FHEO employees overall were eligible for either early or immediate retirement in February 2004 (see fig. 16). Moreover, as we have noted previously, officials that we spoke with also expressed concern that current plans to eliminate a significant number of FHEO staff by voluntary reassignment could cause skilled workers to leave FHEO and seek opportunities elsewhere. Providing effective training is another human capital challenge that FHEO faces. Half of the directors told us that the quality and effectiveness of training helped reduce the length of the fair housing enforcement process, and six said that it improved thoroughness. For example, some directors said that training serves to expedite investigations as staff gain more technical skills. Other directors said that training improves thoroughness because staff can recognize issues of discrimination and decide what evidence is needed to support complaints. We heard concerns from FHEO staff, an ALJ, and others outside of HUD about the quality or availability of training for FHEO employees. Most staff we spoke with reported that they had received initial formal training for their positions, though not always in a timely fashion. A list of courses supplied by the HUD Training Academy, which provides the majority of formal training for FHEO, showed that the basic course in investigation had been offered annually in all but 1 of the last 5 years. 
However, depending on the hiring date, a new staff member might have to wait 1 year or more to attend the basic course. Potentially compounding this problem, hub directors told us that although training was available in fiscal year 2003, lack of travel funds sometimes prevented them from sending staff to training out of the area. Finally, budget data show that although FHEO had initial approval from the HUD Training Academy to spend $416,000 for training in fiscal year 2003, the HUD Training Academy reduced FHEO’s training funds to $200,000 as part of the department’s overall efforts to reduce expenditures to cover the cost of excess staff hiring. FHEO recognized the need for additional training by establishing the HUD Fair Housing Training Academy, which is slated to open in the summer of 2004. FHEO officials told us that they hope to standardize what the agency believes are uneven fair housing processes and practices implemented around the nation by FHEO and its FHAP agency partners, create a more professional group, and possibly reduce turnover rates at FHAP agencies by certifying attendees. Initially, however, the academy will serve staff from only FHAP agencies, not FHEO employees. Officials explained that FHAP agency funds would cover the costs of this initial training. FHEO’s human capital challenges are symptomatic of those facing HUD as a whole. FHEO, like the department and other federal agencies, is experiencing significant challenges in deploying the right skills in the right places at the right time, is facing a growing number of employees who are eligible for retirement, and is finding it difficult to fill certain mission- critical jobs—a situation that could significantly drain its institutional knowledge. We have observed that federal agencies need effective strategic workforce planning to identify and focus on the long-term human capital issues that most affect their ability to attain mission results. 
We identified five key principles that strategic workforce planning should address, which include (1) involving top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan; (2) determining the critical skills and competencies that will be needed to achieve current and future programmatic results; (3) developing strategies that are tailored to address gaps in number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; (4) building the capability needed to address administrative, educational, and other requirements important to support workforce strategies; and (5) monitoring and evaluating the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic goals. In developing strategies to address workforce gaps, we reported that agencies should, among other things, consider hiring, training, staff development, succession planning, performance management, use of flexibilities, and other human capital strategies and tools that can be implemented with resources that can reasonably be expected to be available. We have reported in the past that HUD had not done the strategic workforce planning necessary to address its human capital challenges. Like HUD, FHEO does not have a comprehensive strategic workforce plan to help it meet key human capital challenges. REAP, which estimates the staff needed to handle HUD’s workload in each of its offices, does not include the extensive analysis involved in a comprehensive assessment. However, since we last reported, HUD contracted a technical adviser to conduct a comprehensive workforce analysis. 
Such an assessment would cover current workforce skills, anticipated skill needs, current and future skill gaps, and needed training and development that will be used to develop a comprehensive 5-year departmental workforce plan. Additionally, HUD plans to roll out over the next 3 years a customized human resources and training information system known as the HUD Integrated Human Resources and Training System (HIHRTS). HUD documentation says that the system will replace several legacy systems; will integrate all human resource information into one platform, making information available to managers for strategic planning and employee development; and will help ensure that HUD employees are used effectively. Officials from headquarters and the sites we visited also told us that inconsistencies in the amount and availability of travel funds adversely affected the length and thoroughness of the fair housing enforcement process. As mentioned previously, in May 2003, Congress approved the reprogramming of funds within HUD to cover the cost of excess staff hiring, including a $7.7 million reduction in travel funds. FHEO officials told us that following this reprogramming, they had no travel funds for up to 6 months, preventing investigators from making timely visits to the sites of complaints. Budget data show that FHEO has experienced larger decreases in travel funds than HUD as a whole. From fiscal year 2002 to 2003, HUD’s allotment for travel decreased by 12 percent. FHEO’s travel allotment, however, decreased by 17 percent over the same period. Directors reported that interruptions in travel funds in fiscal year 2003 had impeded efforts to plan and manage investigations. Directors also told us that uncertainties regarding the department’s ultimate annual appropriations amount had forced headquarters to limit travel funds at the beginning of fiscal years and prevented them from establishing a firm annual travel budget.
Without this budget, directors said, they could not plan for the travel that would have helped reduce the length of investigations. Directors reported using several methods to stretch their travel funds, including curtailing and delaying travel, limiting the time investigators could spend in the field, catching up on needed travel when funds became available at the end of the fiscal year, reducing travel for FHEO’s other responsibilities outside of fair housing enforcement, and asking investigators from offices closer to the site of the complaint to assist with the investigation. Some investigators told us that they had used their own vehicles or funds for site visits and conducted desk investigations. At the same time, budget data show that hub directors’ routine meetings consumed an increasing share of FHEO’s travel budget from fiscal year 2001 to 2003. Directors’ meetings accounted for 13 percent of FHEO’s travel expenditures of approximately $900,000 in fiscal year 2003. Hub directors we visited told us that while FHEO’s national performance goals have helped reduce the number of aged cases, these goals have had a negligible impact on the thoroughness of the fair housing enforcement process and could create competing demands for staff time. Performance reports show that the percentage of aged fair housing complaints for HUD nationwide has declined steadily since fiscal year 2000, exceeding the national goals in fiscal years 2001 through 2003. For example, in fiscal year 2003, the national goal was a maximum of 25 percent aged cases, and FHEO achieved 23 percent. However, there are no national goals that directly relate to the thoroughness of investigations or the fair housing enforcement process. Regardless, some directors told us that although they strive to meet performance goals, they are more motivated by the statute’s 100-day benchmark and the need to provide good customer service.
Directors also cited a tension between the need to meet the 100-day benchmark and the simultaneous need to conduct a thorough investigation and said that at times one goal cannot be achieved without some cost to the other. One director stated that while mindful of the 100-day benchmark, she would not close a case to meet the time limit unless she felt that the investigation had been thorough. Directors told us that the existence of overall performance goals for FHEO could exacerbate the problem of competing demands. For example, annual goals routinely set achievement targets in FHEO’s areas of responsibility outside of Title VIII enforcement, including program compliance review, monitoring FHIP grantees and FHAP agencies, increasing the number of substantially equivalent agencies, and providing training on accessibility and handicap rights. The time and resources needed to meet these targets could increase the challenges involved in meeting Title VIII commitments in a timely and thorough manner. The fair housing enforcement process provides a framework for considering complaints of housing discrimination. However, persons who have experienced alleged discrimination in housing can sometimes face a lengthy wait before their complaints are resolved. Because flexibility is built into the process, enforcement practitioners have devised a variety of practices for processing inquiries and complaints, some of which could improve the timeliness and thoroughness of investigations. Our limited look at enforcement operations at FHAP agencies and FHEO centers within 3 of FHEO’s 10 regions revealed practices that could potentially expedite cases if they were adopted elsewhere. Further, many FHEO hub directors told us they believed that every stage of the fair housing enforcement process could be improved.
However, practitioners may be unaware of such practices because FHEO has not taken steps to identify those practices that hold the promise of improving the fair housing enforcement process. Because of data limitations—specifically, data that are of questionable reliability, missing, or not currently collected—FHEO does not know how much time individuals face from the day they make an inquiry to the day they learn the outcome of their cases, particularly when FHAP agencies handle the investigation. Without comprehensive, reliable data on the dates when individuals make inquiries, FHEO cannot judge how long complainants must wait before a FHAP agency undertakes an investigation. Similarly, without comprehensive, consistent, and reliable data concerning the dates that complaints are finally decided, HUD cannot determine how long the intended beneficiaries of the Act typically wait for a decision. Data that provide a comprehensive view of the enforcement process from start to finish for both FHEO and FHAP agencies could help HUD target problem areas and improve management of the enforcement process. TEAPOTS provides a platform that FHEO and FHAP agencies may use for recording these key enforcement data. FHEO’s human capital challenges serve to exacerbate the challenge of improving enforcement practices. Human capital management issues at both HUD and FHEO are an immediate concern. FHEO’s planned reduction in staff and other human capital factors may affect its ability to enforce fair housing laws. To meet such challenges, HUD managers will need to continue their efforts to analyze workforce needs and to develop a workforce planning process that makes the best use of the department’s most important resource—the people that it employs now and in the future. 
A comprehensive strategic workforce planning process that builds on the five principles that we have observed at other federal agencies will help FHEO and other departmental programs identify and focus their investments on the long-term human capital issues that most affect the agency’s ability to achieve its mission. To improve the management and oversight of the fair housing enforcement process, we recommend that the HUD Secretary direct the Assistant Secretary of FHEO to take the following actions: establish a way to identify and share information on effective practices among its regional fair housing offices and FHAP agencies; ensure that the automated case-tracking system includes complete, reliable data on key dates in the intake stage of the fair housing enforcement process for FHAP agencies; ensure that the automated case-tracking system includes complete, reliable data on key dates in the adjudication stage of the fair housing enforcement process for both FHEO and FHAP agencies; ensure that the automated case-tracking system includes complete, reliable data on the outcomes of the adjudication stage of the fair housing enforcement process for FHEO and FHAP agencies; and ensure that hubs enter cause dates into the automated case-tracking system in a consistent manner. Further, we recommend that the Secretary take the following actions: in developing HUD’s 5-year Departmental Workforce Plan, follow the five key principles discussed in this report; and, as part of the comprehensive workforce analysis, ensure that HUD fully considers a wide range of strategies to make certain that FHEO obtains and maximizes the necessary skills and competencies needed to achieve its current and emerging mission and strategic goals with the resources it can reasonably expect to be available. We provided a draft of this report to HUD for its review and comment. We received written comments from the department’s Assistant Secretary for Fair Housing and Equal Opportunity.
These comments, which are included in appendix IV, indicated general agreement with our conclusions and recommendations. The Assistant Secretary noted that FHEO has already begun to take steps to improve the quality and timeliness of the fair housing enforcement process. Specific planned actions that are consistent with our recommendations include (1) implementing a new Business Process Redesign review; (2) establishing a reporting requirement addressing post- cause results; and (3) enhancing, in conjunction with the department, FHEO’s efforts at workforce analysis. The Assistant Secretary commented that FHEO would take a close look at all of the report’s recommendations. HUD’s comments also included several suggestions to enhance clarity or technical accuracy. We revised the report to incorporate these suggestions and have included them in this report where appropriate. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies of this report to the Chair of the Senate Committee on Banking, Housing and Urban Affairs; the HUD Secretary; and other interested congressional members and committees. We will make copies available to others upon request. In addition, this report will also be available at no charge on our Web site at http://www.gao.gov. Please contact me or Mathew J. Scirè at (202) 512-6794 if you or your staff have any questions concerning this report. Key contributors to this report were Emily Chalmers, Rachel DeMarcus, Tiffani Green, M. Grace Haskins, Marc Molino, Andrew Nelson, Carl Ramirez, Beverly Ross, and Anita Visser. Our engagement scope was limited to fair housing investigations conducted under Title VIII of the Civil Rights Act of 1968, as amended, rather than fair housing activities under Section 504 of the Rehabilitation Act of 1973 or Title VI of the Civil Rights Act of 1964. 
To describe the fair housing enforcement process, we reviewed the legislation, regulations, and the Office of Fair Housing and Equal Opportunity’s (FHEO) guidance for intake, investigation, and adjudication of fair housing complaints. We also interviewed officials at FHEO headquarters who are responsible for oversight and policymaking. In addition, we conducted site visits and structured interviews with key FHEO and Fair Housing Assistance Program (FHAP) agency officials, including FHEO hub, FHAP agency, and center directors; intake staff; and investigators and attorneys. We selected 3 of the 10 FHEO hubs and 8 of the 18 centers for site visits (table 4). We selected the hub sites on the basis of (1) the number of “aged” cases within the region, (2) the total number of complaints received, (3) the ratio of FHEO investigations to all investigations, and (4) the number of organizational components—that is, the number of centers and offices within the hub. We ranked each hub on the basis of whether it was among the 3 hubs with the highest values, the 3 with the lowest values, or the 4 hubs with the middle values for the dimensions we measured. We also visited at least one FHAP agency in each of the selected hub regions. To describe the trends in FHEO data on the numbers, characteristics, outcomes, and length of fair housing investigations, we used data from FHEO’s automated case-tracking system (TEAPOTS). Specifically, we obtained data on inquiries and claims made and investigations completed as of September 2003, for each fiscal year from 1996 through 2003. Using these data, we computed the following: number of inquiries and claims made, number of complaints filed, number and outcome of investigations completed, percentage of investigations completed within 100 days, and median length of each enforcement stage. 
For the purposes of measuring the percentage of investigations completed within 100 days, we measured the time elapsed between the most recent of the date filed, date reopened, or date reentered and the date the investigations were either transferred to the Department of Justice, closed administratively, conciliated or settled, found to have reasonable cause, or found not to have reasonable cause. To assess the reliability of the TEAPOTS data we used, we examined (1) the process FHEO and FHAP agencies use to capture and process inquiry and complaint information and (2) the internal controls over the TEAPOTS database that stores and retrieves this information. We interviewed the system’s managers, reviewed documentation of and reports produced by the system, compared some of our results to summary reports previously produced by FHEO, and performed basic reasonableness checks on TEAPOTS data. Missing values and fields, inconsistencies between fields, and out-of-range values in fields were infrequent and did not pose a material risk of error in our analysis. We concluded that the data we analyzed were sufficiently reliable for the purposes of this report. However, we encountered several limitations in the TEAPOTS data that prevented us from using them to fully describe the trends in the numbers, characteristics, and outcomes of fair housing investigations. Because of indications that TEAPOTS data may be either incomplete or inconsistent regarding the dates that inquiries were made, and the dates that an independent fact finder ultimately determined that discrimination did or did not occur, we were unable to provide complete information on one of our report objectives. Specifically, we were unable to report on the average time taken by two phases of the enforcement process for cases handled by FHAP agencies. In attempting to determine the average time needed to complete each stage of the fair housing enforcement process, we relied on data from TEAPOTS. 
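The 100-day measure described above is date arithmetic over case records. The sketch below illustrates the rule with hypothetical records; the field names and values are illustrative, not TEAPOTS’s actual schema or data.

```python
from datetime import date
from statistics import median

# Illustrative case records (hypothetical field names, not the TEAPOTS schema).
# The investigation "clock" starts at the most recent of the filed, reopened,
# or reentered dates, per the measurement rule described in the text.
cases = [
    {"filed": date(2002, 1, 10), "reopened": None, "reentered": None,
     "resolved": date(2002, 4, 5)},
    {"filed": date(2002, 2, 1), "reopened": date(2002, 6, 1), "reentered": None,
     "resolved": date(2002, 11, 1)},
    {"filed": date(2001, 12, 1), "reopened": None, "reentered": None,
     "resolved": date(2002, 9, 1)},
]

def investigation_days(case):
    """Days elapsed from the most recent start date to the resolution date."""
    start = max(d for d in (case["filed"], case["reopened"], case["reentered"]) if d)
    return (case["resolved"] - start).days

durations = [investigation_days(c) for c in cases]
pct_within_100 = 100 * sum(d <= 100 for d in durations) / len(durations)
print(durations, round(pct_within_100, 1), median(durations))
```

The same pattern extends to the median stage lengths reported in the text, applied to the dates bounding each stage.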
Specifically, we obtained TEAPOTS data on complaint investigations completed from 1996 through 2003 by FHEO and FHAP agencies and attempted to measure (1) the time elapsed between a complainant’s first contact with either FHEO or a FHAP agency and the date that the complaint was filed; (2) the time elapsed between filing a complaint and completing an investigation; and (3) the time elapsed between completing an investigation and the final disposition, the end of the adjudication process. Because of inconsistent intake data and missing adjudication data, we were unable to determine the average time that had been required to complete the first and last stages of the complaint process for cases handled by FHAP agencies. The U.S. General Accounting Office (GAO) is currently reviewing HUD’s Fair Housing Enforcement efforts. As part of this review, we have visited a number of Fair Housing and Equal Opportunity (FHEO) offices to talk with enforcement management and staff. To ensure the broadest coverage, GAO is now conducting this survey of all 10 regional FHEO HUB Managers. The purpose of this survey is to identify the factors that HUD fair housing enforcement practitioners believe impact the length and thoroughness of the Title VIII fair housing enforcement process, including intake, investigation, and adjudication. For all questions, please consider the conditions in your entire HUB region, including centers and sites. During the interview, we will ask you to read and discuss your answers, providing examples to the extent possible. If you have any questions about this survey or the GAO study, please contact _____________ at ____________ or e-mail her at: _______________ Thank you for your participation. In the survey, we use the following terms: Length: The amount of time that elapses between the date a Title VIII complaint is received at HUD as an inquiry and the date that the complaint is resolved (e.g., administrative closure, conciliation, adjudication through ALJ hearing, or other means). Thoroughness: The extent to which accurate and complete evidence is collected and analyzed to enable staff (investigators, attorneys, etc.) to recommend and make the appropriate resolution. Subject matter/issue: As used in the HUD Title VIII Investigations Handbook (p. 3-24), subject matters and issues include items such as rentals, sales, lending, and redlining. Prohibited basis of discrimination: As used in the HUD Title VIII Investigations Handbook (p. 3-44), this term covers race, color, religion, sex, national origin, familial status, and handicap. 1. To what extent do the following factors influence the amount of time it takes to complete the Title VIII fair housing enforcement process? We understand that many things can affect the length of the process. However, we ask that when responding to each specific factor, you hold all others constant and check the box that comes closest to your “best answer.” Check one box for each row. (The response rows listed factors such as human capital management and planning, performance management goals set for your Region, Director’s Elements set for you as a HUB Director, L. workforce analysis (alignment of staff skill with mission accomplishment), and N. an open-ended “Please specify” item, to be typed in a shaded blank.) 2. To what extent do the following factors influence the ability of your staff to thoroughly complete the Title VIII fair housing enforcement process? We understand that many things can affect the thoroughness of the process. However, we ask that when responding to each specific factor, you hold all others constant and check the box that comes closest to your “best answer.” Check one box for each row. (The response rows repeated the factors from question 1, ending with L. workforce analysis and N. “Please specify.”) 3. 
To what extent do the following enforcement practices impact the overall length of the Title VIII fair housing enforcement process (or to what extent would they impact the length of the process, if your office does not practice them)? We understand that many things can affect the length of the process. However, we ask that when responding to each specific practice, you hold all others constant and check the box that comes closest to your “best answer.” Check one box for each row. (Among the practices listed was conducting investigations while in conciliation.) 4. To what extent do the following enforcement practices impact the overall thoroughness of the Title VIII fair housing enforcement process (or to what extent would they impact the thoroughness of the process, if your office does not practice them)? We understand that many things can affect the thoroughness of the process. However, we ask that when responding to each specific practice, you hold all others constant and check the box that comes closest to your “best answer.” Check one box for each row. 5. To what extent could practical improvements be made to each of the following Title VIII activities that would reduce the amount of time required to complete the entire process? Please consider any ideas or practices that differ from HUD's current enforcement process and that, with proper funding and training, would improve the overall length of the process. You will have an opportunity to share these ideas and practices during our follow-up interview. Check one box for each row. (The listed activities included B. the investigation stage, including the conciliation process.) 6. To what extent could practical improvements be made to each of the following Title VIII activities that would improve the overall thoroughness of the entire process? Please consider any ideas or practices that differ from HUD's current enforcement process, and that, with proper funding and training, would improve the overall thoroughness of the process. You will have an opportunity to share these ideas and practices during our follow-up interview. Check one box for each row. (The listed activities again included B. the investigation stage, including the conciliation process.) 7. Please describe any noteworthy practices that your office uses in each stage of the Title VIII fair housing enforcement process. (If you are using the Word version of this survey, please type your answers in the shaded blanks.) Response blanks were provided for each stage, including the investigation stage (including the conciliation process). Some hub directors may have defined the investigation and adjudication stages differently than other directors. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. 
| Discrimination in housing on the basis of race, sex, family status, and other grounds is illegal in the United States. Each year, the Department of Housing and Urban Development's Office of Fair Housing and Equal Opportunity (FHEO) and related agencies carry out enforcement activities for several thousand complaints of housing discrimination. The timeliness and effectiveness of the enforcement process have been continuing concerns. GAO describes the stages and practices of the fair housing enforcement process, looks at recent trends, and identifies factors that may influence the length and thoroughness of the process. The current fair housing enforcement process provides a framework for addressing housing discrimination complaints. Both FHEO and Fair Housing Assistance Program (FHAP) agencies located around the country take inquiries about potential incidences of discrimination and conduct investigations to determine whether discrimination did in fact occur. The practices used during intake and investigation differ among FHEO and the FHAP agencies, as the state and local agencies have some discretion in determining which practices work best for them. As a result, some agencies have developed procedures that they said improved the quality of intake and made investigations easier. For example, some FHAP agencies use experienced investigators during the intake process to help clients develop formal complaints. To date, FHEO has not looked at such practices to determine if they should be disseminated for potential use at other locales. Further, individuals alleging discrimination in housing sometimes face a lengthy wait to have their complaints investigated and decided. Although the law sets a benchmark of 100 days to complete investigations into complaints of discrimination, FHEO and the FHAP agencies often do not meet that deadline. 
The typical time to complete an investigation in 1996 through 2003 was more than 200 days, with some investigations taking much longer. However, a lack of data makes it impossible to assess the full length and outcomes of fair housing enforcement activities. For example, because FHAP agencies are not required to report intake data to FHEO, complete information is not available on the number of initial contacts individuals alleging discrimination make with FHAP agencies. A lack of data on the ultimate outcomes of some investigations conducted by both FHEO and FHAP agencies may also prevent FHEO from fully measuring the time that complainants wait before cases are ultimately decided. Human capital management challenges, such as ensuring adequate numbers of trained staff, further affect FHEO's ability to carry out its mission in a timely manner. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Selected provisions of federal law explicitly prohibit specific categories of drug offenders from receiving certain federal benefits for specified periods. Table 1 identifies key provisions of federal law that provide for denial of benefits specifically to drug offenders and the corresponding benefits that may or must be denied to drug offenders. Except for federal licenses, procurement contracts, and grants under the Denial of Federal Benefits Program, the benefits that may or must be denied are benefits that are generally provided to low-income individuals and families. TANF, food stamps, federally assisted housing, and Pell Grants are low-income programs. The Denial of Federal Benefits Program, established under Section 5301 of the Anti-Drug Abuse Act of 1988, as amended, provides that federal and state court judges may deny all or some of certain specified federal benefits to individuals convicted of drug trafficking or drug possession offenses involving controlled substances. Additional details on each of the programs may be found in appendices II, III, IV, and V. The provisions differ on key elements. For example, they establish different classes of drug offenders that may or must be denied benefits, and they provide for different periods that drug offenders are rendered ineligible to receive a benefit and whether or not benefits can be restored. Some of the provisions allow that drug offenders may become eligible for benefits upon completing a recognized drug treatment program. Provisions established by the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), as amended, which govern the TANF and food stamp programs, provide that benefits must be denied to persons convicted of a state or federal felony drug offense that involves the possession, use, or distribution of a controlled substance that occurred after August 22, 1996, the effective date of these provisions. 
Students become ineligible to receive federal postsecondary education benefits upon a conviction of either a misdemeanor or a felony controlled substances offense. Loss of federally assisted housing benefits can occur if individuals, relatives in their household, or guests under a tenant’s control engage in drug-related criminal activity, regardless of whether the activity resulted in a conviction. Local public housing authorities (PHAs), which administer federally assisted housing benefits, have discretion in determining the behaviors that could lead to loss of certain federal housing benefits. Under the Denial of Federal Benefits Program, judges in federal and state courts may deny a range of federal licenses, contracts, and grants to persons convicted of controlled substances drug trafficking and drug possession offenses. The period of ineligibility to receive benefits varies. Under PRWORA, as amended, unless states enact laws that exempt convicted drug offenders in their state from the federal ban, TANF and food stamp benefits are forfeited for life for those convicted of disqualifying drug offenses. State laws may also result in a shorter period of denial of these benefits. Students are disqualified from receiving federal postsecondary education benefits for varying periods depending on the number and type of disqualifying drug offense convictions. A first conviction for possession of a controlled substance, for example, results in a 1-year period of ineligibility, while a first conviction for sale of a controlled substance results in a 2-year period of ineligibility. Upon subsequent convictions, the period of ineligibility can extend indefinitely. Federally assisted housing benefits may also be denied for varying periods of time, depending upon the number and types of drug-related criminal activities. The minimum loss of benefit is 3 years in certain circumstances, and the maximum is a lifetime ban. 
For example, for persons convicted of certain methamphetamine offenses, the ban is mandatory and for life. Under the Denial of Federal Benefits Program, the denial of certain other types of benefits by judges, such as federal grants and contracts, can range from 1 year to life depending on the type of offense and number of convictions. In some cases, the period of benefit ineligibility may be shortened if offenders complete drug treatment. For example, students may have their postsecondary education benefits restored if they satisfactorily complete a drug treatment program that satisfies certain criteria and includes two unannounced drug tests. Under the Denial of Federal Benefits Program, the denial of benefits penalties may, for example, be waived if a person successfully completes a drug treatment program. Other than offenders who were convicted of methamphetamine offenses, drug offenders that successfully complete drug treatment may receive federally assisted housing benefits prior to the end of their period of ineligibility. In states that have passed laws so specifying, drug offenders may shorten the period of ineligibility for TANF and food stamp benefits by completing drug treatment. (See table 2.) The legislative history of these provisions is silent as to whether they were intended to do more than provide for denying federal benefits to drug offenders, such as deterring drug offenders from committing future criminal acts. For example, our 1992 report indicated that in the floor debates over the Denial of Federal Benefits Program, some members of Congress expressed the opinion that even casual drug use should result in serious consequences, such as the loss of federal benefits. 
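The education-benefit ineligibility periods described above reduce to a small rule table. The sketch below models only the rules the text states (1 year for a first possession conviction, 2 years for a first sale conviction, indefinite upon subsequent convictions); the actual statutory schedule is more detailed, and the function name is illustrative.

```python
# Simplified model of the ineligibility periods described in the text.
# This covers only the stated rules, not the full statutory schedule.
INDEFINITE = float("inf")

def education_ineligibility_years(offense: str, prior_convictions: int) -> float:
    """Years of student-aid ineligibility triggered by a new conviction."""
    if prior_convictions > 0:
        return INDEFINITE   # subsequent convictions: can extend indefinitely
    if offense == "possession":
        return 1.0          # first conviction for possession: 1 year
    if offense == "sale":
        return 2.0          # first conviction for sale: 2 years
    raise ValueError(f"unknown offense type: {offense}")

print(education_ineligibility_years("possession", 0))
```

As the text notes, completing a qualifying drug treatment program can restore eligibility before these periods end, a condition this sketch does not model.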
With respect to prohibiting drug offenders from public housing, congressional findings made in 1990 and amended in 1998 address the extent of drug-related criminal activities in public housing and the federal government’s duty to provide public and other federally assisted low-income housing that is decent, safe, and free from illegal drugs. TANF, food stamps, federally assisted housing, and Pell Grants are means- tested benefits. To receive the benefits, individuals must meet certain eligibility criteria. These criteria vary with the benefit. For instance, states determine maximum earned income limits for TANF, but to receive food stamps, the federal poverty guidelines are generally used in determining eligibility. To receive federally assisted housing, local area median income is used. Additionally, most adults eligible for TANF and some adults eligible for food stamps must meet specified work requirements to participate in the programs. Table 3 summarizes the general eligibility requirements for the federal benefits discussed in this report and identifies the federal, state, and local agencies responsible for administering the programs. Not all persons who meet the general eligibility requirements to receive federal benefits participate in the respective programs. Our recent study on programs that aim to support needy families and individuals shows that the portion of those eligible to receive the benefits that actually enrolled in the programs varied among programs. Among families eligible to participate in TANF in 2001, between 46 percent and 50 percent were estimated to be participating in the program. For food stamps in 2001, between 46 percent and 48 percent of eligible households were estimated to participate in the program. 
For federally assisted housing, between 13 percent and 15 percent of eligible households in 1999 were estimated to be covered by the Housing Choice Voucher (HCV) Program and between 7 percent and 9 percent of eligible households in 1999 were estimated to be covered by the Public Housing Program. Further, the Department of Education reports that among all applicants for federal postsecondary education assistance in academic year 2001-2002, about 77 percent of the applicants that were eligible to receive Pell Grants applied for and received them. Drug offenders would be directly affected by the federal provisions that allow for denial of low-income federal benefits when, apart from their disqualifying drug offense, they would have qualified to receive the benefits. For example, if a drug offender is not in a financially needy family and living with her dependent child, the drug offender would not be eligible for TANF benefits aside from the drug offense conviction. To be directly affected by the ban on food stamps, a drug offender would have had to meet income tests and work requirements, unless the work requirements are, under certain specified circumstances, identified as not applicable by federal food stamp laws; otherwise, the offender’s ineligibility to receive the benefit would disqualify him, as opposed to his drug offense. Because the ban on the receipt of TANF and food stamps is for life, an offender who is not otherwise eligible to receive the benefits at one point in time might become otherwise eligible to receive the benefits at a later point in time and at that time be affected by the provisions of PRWORA. To be otherwise eligible to receive federal postsecondary education assistance, a person convicted of a disqualifying drug offense would, at a minimum, have to be enrolled in or accepted at an institution of higher education, as well as meet certain income tests. 
To be otherwise eligible for federally assisted housing benefits, a person would have to meet income tests. We estimated that among applicants for federal postsecondary education assistance, drug offenders constituted less than 0.5 percent on average of all applicants for assistance during recent years. In general, the educational attainment level of drug offenders is lower than that of the general population, and this lower level affects drug offenders’ eligibility for federal postsecondary assistance. Among selected large PHAs that reported denying applicants admission into public housing during 2003, less than 5 percent of applicants were denied admission because of drug-related criminal activities. PHAs have discretion in developing policies to deny benefits to offenders for drug-related criminal activities. Federal and state court sentencing judges were reported to have denied federal benefits to fewer than 600 convicted drug offenders in 2002 and 2003, or less than 0.2 percent of felony drug convictions on average. According to Department of Education data on applicants for federal postsecondary education assistance for the academic years from 2001-2002 through 2003-2004, less than 0.5 percent on average of the roughly 11 million to 13 million applicants for assistance reported on their applications that they had a drug offense conviction that made them ineligible to receive education assistance in the year in which they applied. These numbers do not take into account the persons who did not apply for federal postsecondary education assistance because they thought that their prior drug convictions would preclude them from receiving assistance or any applicant who falsified information about drug convictions. 
Using these data and Department of Education data on applicants that received assistance for the academic years 2001-2002 through 2003-2004, we estimated that between 17,000 and 20,000 applicants per year would have been denied Pell Grants, and between 29,000 and 41,000 would have been denied student loans if the applicants who self-certified to a disqualifying drug offense were eligible to receive the benefits in the same proportion as the other applicants. (See app. III for details on our methods of estimating these figures.) In general, the educational attainment levels of persons convicted of drug offenses are lower than those of persons in the general population. This results in proportionately fewer persons eligible for these education benefits than in the general population. Our analysis of data from the only national survey of adults on probation that also reports on their educational attainment indicates that among drug offenders on probation during 1995, less than half had completed high school or obtained a general equivalency degree (GED)—prerequisites for enrolling in a postsecondary institution. By comparison, according to a Bureau of Justice Statistics (BJS) report, about 18 percent of adults in the general population had less than a high school degree. More recent data from the U.S. Sentencing Commission on roughly 26,000 drug offenders sentenced federally during 2003 indicate that half of them had less than a high school degree, about one-third had graduated from high school, and about 18 percent had at least some college. In addition, our analysis of BJS data on drug offenders released from prisons in 23 states during 2001 indicate that about 57 percent of these drug offenders had not completed high school by the time they were admitted into prison; about 36 percent had completed high school or obtained a GED as their highest level of education completed; and the remainder had completed some postsecondary education. 
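The proportional estimate described earlier (that self-certifying applicants would have received aid at the same rate as other applicants) is a simple rate calculation. The figures below are illustrative placeholders, not the Department of Education's actual counts.

```python
# Proportional estimate of denied applicants, as described in the text:
# assume applicants who self-certified a disqualifying drug offense would
# have received Pell Grants at the same rate as all other applicants.
# All three input figures are illustrative, not actual data.
applicants = 12_000_000       # total aid applicants in a year
self_certified = 45_000       # applicants self-certifying a disqualifying offense
pell_recipients = 4_500_000   # Pell Grant recipients among the other applicants

other_applicants = applicants - self_certified
pell_rate = pell_recipients / other_applicants        # receipt rate among others
estimated_denied_pell = self_certified * pell_rate    # estimated denials
print(round(estimated_denied_pell))
```

The same calculation, run with each year's actual applicant and recipient counts, yields a per-year estimate such as the 17,000 to 20,000 range reported above for Pell Grants.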
We obtained data from 17 of the largest PHAs in the nation on the decisions that they made to deny federally assisted housing benefits to residents or applicants during 2003. Thirteen of the 17 PHAs reported data on both (1) the number of leases in the Public Housing Program units that they manage that ended during 2003 and (2) the number of leases that were terminated for reasons of drug-related criminal activities. These 13 PHAs reported terminating leases of 520 tenants in the Public Housing Program because of drug-related criminal activities. The termination of a lease is the first step in evicting tenants from public housing. Tenants whose leases were terminated for reasons of drug-related criminal activities constituted less than 6 percent of the 9,249 leases that were terminated in these 13 PHAs during 2003. Among these PHAs, the percentage of terminations of leases for reasons of drug-related criminal activities ranged from 0 percent to less than 40 percent. These PHAs also reported that the total number of lease terminations in 2003 and the volume of denials for drug-related activities were generally comparable with similar numbers for the 3 prior years. (See app. IV for data for each PHA that responded to our request for information.) Fifteen PHAs acted on 29,459 applications for admission into the Public Housing Program during 2003. Among these applicants seeking residency, we estimated that less than 5 percent were denied admission because of their drug-related criminal activities. The PHAs also reported that they acted on similar numbers of applicants and made similar numbers of denial decisions in the prior 3 years. Table 4 shows the data on lease terminations and denials of admission in two federally assisted housing programs. 
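The shares reported above are simple ratios of the reported counts. A minimal check using the figures from the text (520 drug-related terminations out of 9,249 total lease terminations at the 13 reporting PHAs):

```python
# Share of lease terminations attributed to drug-related criminal
# activities, using the counts reported in the text.
drug_related_terminations = 520
total_terminations = 9_249

share_pct = 100 * drug_related_terminations / total_terminations
print(f"{share_pct:.1f}% of terminations were for drug-related activities")
```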
We also obtained and analyzed data from HUD on the number of evictions and denials of admission into public housing during fiscal years 2002 and 2003 that occurred for reasons of criminal activity, of which drug-related criminal activity is a subset. More than 3,000 PHAs reported to HUD that in each of these years there were about 9,000 evictions for reasons related to criminal activity and about 49,000 denials of admission for reasons of criminal activity. As a percentage of units managed by these PHAs, evictions for reasons of criminal activity in each of these years amounted to less than 1 percent of units managed, and denials of admission amounted to about 4 percent of units managed. Evictions and denials for reasons of drug-related criminal activities would have to be equal to or, much more likely, less than these percentages. On the basis of data that 9 PHAs were able to report about terminating participation in the HCV Program during 2003, we estimated that less than 2 percent of the decisions to terminate assistance in the HCV program (of the roughly 12,700 such decisions) were for reasons of drug-related criminal activities. In addition, 9 PHAs reported that they acted on 21,996 applications for admission into the HCV Program and that less than 1.5 percent of applicants were denied admission for reasons of drug-related criminal activities. Local PHAs that administer federally assisted housing benefits have discretion in determining whether current tenants or applicants for assistance have engaged in drug-related criminal activities that disqualify them from receiving housing benefits. HUD requires PHAs to develop guidelines for evicting from or denying admittance into federally assisted housing to individuals who engage in drug-related criminal activity. A November 2000 HUD study on the administration of the HCV Program described the variation in PHA policies on denying housing to persons who engaged in drug-related criminal activities. 
HUD concluded that because of the policy differences, some PHAs could deny applicants who could be admitted by others. For example, some PHAs consider only convictions in determining whether applicants qualify for housing benefits, while others look at both arrests and convictions. Some look for a pattern of drug-related criminal behavior, while others look for evidence that any drug-related criminal activities occurred. In addition, among PHAs, the period of ineligibility for assistance arising from a prior eviction from federally assisted housing because of drug-related criminal activities ranged from 3 to 5 years. (See app. IV for a summary of selected PHA policies.) Any imbalance between the supply of and demand for federally assisted housing may also affect whether drug offenders are denied access to this benefit. The stock of available federally assisted housing units in the Public Housing Program is generally insufficient to meet demand. PHAs may have long waiting lists, up to 10 years in some cases, for access to federally assisted housing. As PHAs generally place new applicants at the end of waiting lists, a drug offender who might be disqualified from federally assisted housing but who applies for housing assistance could go to the end of a PHA’s waiting list. Until that applicant moved to the top of the waiting list, the limited supply of federally assisted housing, and not necessarily a drug offense conviction, would effectively deny the applicant access to the benefit. Between 1990 and the second quarter of 2004, BJA received reports from state and federal courts that 8,298 offenders were sanctioned under the Denial of Federal Benefits Program in federal and state courts. This amounted to an average of fewer than 600 offenders per year. The Denial of Federal Benefits Program provides judges with a sentencing option to deny federal benefits such as grants, contracts, and licenses. 
About 62 percent of the cases reported to be sanctioned under the Denial of Federal Benefits Program occurred in federal courts, and the remaining 38 percent occurred in state courts. For recent years (2002 and 2003), BJA reported that fewer than 600 persons were denied federal benefits under the program. In 2002, there were more than 360,000 drug felony convictions nationwide. On average, less than 0.2 percent of these convicted drug felons were sanctioned under this program. According to the BJA data, state court judges in 7 states have imposed the sanction, and state court judges in Texas accounted for 39 percent of all cases in which drug offenders were reportedly denied benefits under this program by state court judges. Federal judges in judicial districts in 26 states had reportedly imposed denial of benefits sanctions, and federal judges in Texas accounted for 21 percent of the cases in which federal judges reportedly denied benefits. The pattern of use of sanctions under this program, with substantially more use in some jurisdictions than in others, may indicate that there are drug offenders in some locations who could have received the sanction but did not. (See app. V for more information about this program.) We previously reported on the relatively limited use of this sanction. We reported then that many offenders who could be denied access to federal benefits would also be sentenced to prison terms that exceed the benefit ineligibility period; therefore, upon release from prison, the offenders would not necessarily have benefits to lose. BJA officials reported that as of 2004, about 2,000 convicted drug offenders were still under sanction under the Denial of Federal Benefits Program, as the period of denial had expired for the other sanctioned offenders.
Most states have acted on the discretionary authority provided them under federal law to enact legislation that exempts some or all convicted drug felons in their states from the federal bans on their receipt of TANF and food stamps. That is, these state laws provide that convicted drug felons are not banned for life from receiving TANF and food stamps, provided they meet certain conditions. For states that had not modified the federal ban on TANF, we estimated that about 15 percent of all offenders and 27 percent of female offenders released from prison during 2001 would have met selected eligibility requirements and would therefore potentially be affected by the ban. We also estimated that among drug offenders released from prison during 2001 in states that had not modified the federal ban on food stamps, about a quarter were custodial parents whose reported income was below federal poverty thresholds for food stamps. While food stamps are not limited to custodial parents, and the ban could affect other drug offenders, we limited our analysis to this group. A total of 32 states have enacted laws that exempt all or some convicted drug felons from the federal ban on TANF benefits. Of these states, 9 have enacted laws that exempt all convicted drug felons from the federal ban, and these persons may receive TANF benefits provided that they meet their state's general eligibility criteria. Another 23 states have passed laws that exempt some drug felons from the TANF ban.
The modifications, which allow some convicted drug felons to receive benefits, generally fall into three categories: (1) Some states permit felons convicted of drug use or simple possession offenses to continue to receive TANF benefits but deny them to felons convicted of drug sales, distribution, or trafficking offenses; (2) some states allow convicted felons to receive TANF benefits only after a period of time has passed; and (3) some states allow convicted drug felons to receive TANF benefits conditioned upon their compliance with drug treatment, drug testing, or other conditions. (See app. II for the status of states' exemptions to the TANF ban.) Using state-level data on drug arrests as a proxy for state-level data on drug convictions, we estimated that the 9 states that completely opted out of the TANF ban and exempted all convicted drug felons from the ban accounted for about 10 percent of drug arrests nationally in 2002. The 23 states whose exemptions modified the TANF ban accounted for about 45 percent of drug arrests nationally. For these states with various exemptions, it is difficult to determine to which drug felons the ban might apply, as participation in the program is contingent upon a felon's behavior (such as abiding by conditions of probation or parole supervision, or participating in drug treatment). Finally, the 18 states that fully implemented the TANF ban accounted for about 45 percent of all drug arrests nationwide. Using Bureau of Justice Statistics survey data on the family and economic characteristics of drug offenders in prison and state-level data on the number of drug offenders released from prison during 2001 in 14 of the 18 states that fully implement the ban on TANF, we estimated that about 15 percent of those released from prison were parents of minor children, lived with their children, and had earned income below the maximum levels permitted by their states of residence.
That is, but for the ban, they may have been eligible to receive TANF benefits. We estimated that the majority of drug felons—who are single males and not custodial parents—did not meet these TANF eligibility requirements and would therefore not have been qualified to receive the benefit even in the absence of the provisions of PRWORA. (See app. II for additional information about the methods used to estimate these quantities.) Female drug offenders released from prison in the 14 states constituted about 13 percent of drug offenders released from prison in 2001. We estimated that between 25 percent and 28 percent of these female offenders were parents of minor children who lived with their children and whose incomes were below state thresholds, and therefore stood to lose TANF benefits. This percentage among female drug offenders released from prison is about twice that for males. From the available data, we estimated that less than 15 percent of male prisoners were parents who lived with their children and had earned incomes that would qualify them to receive TANF benefits. Other factors, which we could not take into account to estimate the percentages of drug offenders that could be eligible to receive TANF benefits, include citizenship status and total family income. Noncitizens with fewer than 5 years of residence in the United States are generally ineligible to receive TANF. Several of the states for which we obtained data on drug offenders released from prison have relatively large noncitizen populations. Therefore, among those drug offenders that we estimated could have been eligible to receive TANF benefits might be some ineligible noncitizens. In addition, the data that we used to estimate whether drug offenders met state income eligibility requirements included individual income rather than total family income.
It is possible that some prisoners would join family units with incomes above state TANF eligibility earned income limits and would thus be disqualified for benefits. Among the drug offenders released from prison during 2001, the percentage that may be affected by the TANF ban at any time during their lifetimes would be greater than our estimate of those initially affected. This is because at a later date some of these offenders may meet the general eligibility criteria for receiving benefits. Thus, the percentage ever affected by the bans would grow over time. Because of data availability, our estimates focus on convicted drug felons who were in prison. We do not have data to assess the effect of the TANF ban on drug felons who received probation or who were sentenced to time in local jails. According to BJS data, nationwide, about one-third of convicted drug felons are sentenced to probation. Moreover, our estimates apply to the states that fully implemented the ban on TANF. Because of complexities associated with state exemptions to the federal ban and the lack of sufficiently detailed data, we cannot provide an estimate of the percentage of convicted drug offenders who could be affected by the ban in the 23 states that modified the TANF ban. We note, however, that state modifications to the ban may allow convicted drug felons to participate in TANF if they abide by the conditions set in the state exemptions (such as abide by conditions of parole or probation supervision or participate in drug treatment). In these states, unlike in the states that fully implement the federal ban, the post-conviction behavior of offenders would help to determine whether they could receive the benefit. Other state modifications allow drug offenders to receive TANF benefits at some point in the future (such as after completing drug treatment or receiving a sufficient number of negative drug test results). 
In states that require that drug felons wait before becoming eligible to participate in TANF, the federal ban is in effect until the waiting period ends. We would therefore expect estimates of the percentage affected during the waiting period to be similar to the estimates of the percentage affected in the states that fully implemented the federal ban. At the time of our review, 15 states fully implemented the federal ban on food stamp benefits to convicted drug felons, and 35 states had passed laws to exempt all or some convicted drug felons in their own states from the federal ban on food stamps. Of the 35 states with exemptions, 14 states exempt all convicted drug felons from the food stamp ban, and 21 have laws that exempt some convicted drug felons from the food stamp ban provided that they meet certain conditions. In the 21 states that modified the food stamp ban, the modifications are similar to those for TANF and generally include (1) exempting persons convicted of drug possession from the ban, while retaining it for persons convicted of drug sales, distribution, or trafficking; (2) requiring a waiting period to pass before eligibility is restored; and (3) conditioning food stamp eligibility upon compliance with drug treatment, drug testing, or other conditions. (See app. II for the status of states’ exemptions to the food stamps ban.) States’ decisions to exempt all or some convicted drug felons in their states from the ban on food stamps affect the proportion of drug felons that can be affected by the ban. Using the state-level drug arrest data as the proxy for felony drug convictions (as we did for TANF), we find that the 15 states that fully implemented the ban on food stamps accounted for about 22 percent of all drug arrests nationally. 
Using data from the BJS inmate survey on the family and economic characteristics of drug offenders in prison and state-level data on the number of drug offenders released from prison during 2001 in 12 of the 15 states that fully implemented the ban on food stamps, we estimated that about 23 percent of those released from prison were parents of minor children whose incomes were below the federal poverty guidelines. Among male drug offenders, we estimated that about 22 percent met these conditions, while among female drug offenders, we estimated that about 36 percent did. We are unable to provide an estimate of the percentage of drug offenders that could be eligible to receive food stamps as able-bodied adults without dependent children. According to USDA, in 2003, this class of food stamp recipients constituted about 2.5 percent of food stamp recipients nationwide. Food stamps are not limited to custodial parents. However, we limited our assessment to custodial parents because of data limitations. Because the denial of food stamps is a lifetime ban, the number of drug offenders affected by the ban will increase over time, as additional convicted drug felons are released from prison. Also, as with the TANF estimates, data limitations precluded our providing estimates for the felony drug offenders that were sentenced to probation in 2001 or for the states that modified the federal ban. A complex array of federal law provisions allows or requires that federal benefits be denied to different classes of drug offenders. These laws also allow a good deal of discretion in implementation, which can exempt certain drug offenders from their application. Our estimates indicate that denial of benefit laws potentially affect relatively small percentages of drug offenders, although the numbers potentially affected in given years may be large. There are a number of reasons why the percentages affected may be relatively small.
First, large numbers of drug offenders would not be eligible for these benefits regardless of their drug offender status. For example, those who lack a high school diploma are ineligible for postsecondary educational loans or grants, and many do not meet eligibility requirements for TANF and food stamps. Also, in the case of TANF and food stamps, the majority of states have used their discretion to either partially or fully lift the ban on these benefits for certain drug offenders. It is important to note that although the overall numbers of drug offenders that could be affected by the TANF and food stamp bans are relatively small in comparison with the total number of drug offenders, our estimates suggest that the effects of the bans fall disproportionately on female offenders. This is because they are more likely to be custodial parents with low incomes and thus otherwise eligible for the benefits. We provided a draft of this report to the Attorney General; the Secretaries of the Departments of Education, Agriculture, and Housing and Urban Development; the Assistant Secretary of the Administration for Children and Families; the Director of the Office of National Drug Control Policy; the Research Director of the United States Sentencing Commission; and the Director of the Administrative Office of the United States Courts for their review and comment. We received technical comments from the Departments of Justice, Agriculture, and Education, and from the United States Sentencing Commission and Administrative Office of the United States Courts, which we incorporated into the report where appropriate.
We are sending copies of this report to the Attorney General; the Secretaries of the Departments of Education, Health and Human Services, Agriculture, and Housing and Urban Development; the Director of the Office of National Drug Control Policy; the Research Director of the United States Sentencing Commission; and the Director of the Administrative Office of the United States Courts. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Federal law provides that certain drug offenders may or must be denied selected federal benefits, such as Temporary Assistance for Needy Families (TANF), food stamps, federally assisted housing, postsecondary education grants and loans, and certain federal contracts and licenses. Our objectives were to analyze and report on two interrelated questions about the number or percentage of drug offenders that could be affected by the provisions: (1) In specific years, how many drug offenders were estimated to be denied federal postsecondary education benefits, federally assisted housing, and selected grants, contracts, and licenses? (2) What factors affect whether drug offenders would have been eligible to receive TANF and food stamp benefits, but for their drug offense convictions, and for a recent year, what percentage would have been eligible to receive these benefits? In addition, we were asked to address the impact of federal benefit denial laws on minorities and the long-term consequences of denying federal benefits on the drug offender population and their families.
Because of severe data limitations, we were unable to provide a detailed response to this matter. The final sections of appendixes II, III, IV, and V in this report include discussions of the data limitations that precluded us from estimating the impacts on minorities. Where information was available, we also identify in the appendixes some of the possible long-run consequences of denial of benefits. We limited our analysis of federal laws to those that explicitly included provisions that allowed for or required the denial of federal benefits to drug offenders. We excluded other provisions that provide for denial of benefits to all offenders, of which drug offenders are a subset. We also excluded from our analysis provisions that applied to offenders only while they are incarcerated and provisions that applied to fugitive felons. Other federal laws relating to drug offenders but not within the scope of our review include provisions such as those making a person ineligible for certain types of employment, denying the use of certain tax credits, and restricting the ability to conduct certain firearms transactions. Further, because of the limited data available on persons actually denied federal benefits, we provide rough estimates of either the number or the percentage of drug offenders affected by the relevant provisions. We provide an overview of these methodologies below, but we discuss the specifics of our methodologies for analyzing and estimating the impacts of denying specific federal benefits in appendixes II through V. We assessed the reliability of the data that we used in preparing this report by, as appropriate, interviewing agency officials about their data, reviewing documentation about the data sets, and conducting electronic tests. We used only the portions of the data that we found to be sufficiently reliable for our purposes in this report.
We conducted our work primarily in Washington, D.C., at the headquarters of five federal agencies—the Departments of Justice (DOJ), Agriculture (USDA), Housing and Urban Development (HUD), Education (ED), and Health and Human Services (HHS)—responsible for administering the denial of federal benefit laws. We also conducted work at the Office of National Drug Control Policy (ONDCP)—which has responsibilities for national drug control policy—the Administrative Office of the United States Courts (AOUSC)—which provides guidance to the courts for the implementation of statutory requirements—and the United States Sentencing Commission (USSC)—which has responsibilities for monitoring federal sentencing outcomes. To estimate how many or what percentage of drug offenders were reported to be denied federal postsecondary education and federally assisted housing benefits and certain grants and contracts under the Denial of Federal Benefits Program, we obtained and analyzed data from agency officials. From ED, we obtained data for several years on the number of applicants using the Free Application for Federal Student Aid (FAFSA), the number of these who reported a disqualifying drug offense conviction, the number eligible for Pell Grants, and the number receiving Pell Grants and student loans. We analyzed these data to generate our estimates of the number of those that reported disqualifying drug offenses that would have been eligible to receive Pell Grants and student loans. We also obtained Bureau of Justice Statistics (BJS) data that reported on the educational attainment of a nationally representative sample of offenders on probation. We used these data, along with USSC data on sentenced drug offenders and BJS data on drug offenders released from prison, to assess the education levels of drug offenders.
To identify factors that could contribute to the number of drug offenders denied federal postsecondary education benefits, we interviewed officials at ED about federal regulations, guidance, and rulings pertaining to the eligibility to receive benefits. Appendix III describes in more detail our methods for estimating the education of those denied education benefits. From a nonprobability sample of some of the largest public housing agencies (PHA) in the United States, we obtained information about reported actions taken in 2003 in these PHAs to deny persons federally assisted housing benefits for reasons of drug-related criminal activities. We selected large agencies because of the volume of actions that they take in a given year and to provide indications of the range of outcomes in PHAs in different settings with different populations. We also obtained and analyzed data from HUD on persons reportedly evicted from or denied admission into public housing for reasons of criminal activities. From selected PHAs, we obtained, analyzed, and compared termination and admissions policies and procedures used during 2003 or 2004 to deny federally assisted housing to persons involved in drug-related criminal activities. We also spoke with staff from selected research organizations, national associations, and PHAs to review the eligibility criteria to receive federal benefits. Appendix IV describes our methods for assessing denials of federally assisted housing. From the Bureau of Justice Assistance (BJA), we obtained data on drug offenders reported to have been denied federal benefits under the Denial of Federal Benefits Program. We spoke with officials at BJA about the current operations and plans to enhance the program, and we interviewed officials from USSC and AOUSC about the operations of this program. 
We also interviewed ONDCP officials about the array of federal provisions that provide for denial of federal benefits and federal programs that provide for drug treatment for drug offenders. Appendix V describes our methodology for analyzing the Denial of Federal Benefits Program. Data limitations concerning the actual number of persons denied TANF and food stamp benefits required us to develop estimates of the drug offenders that could be denied these benefits—that is, those with characteristics that would have qualified them to receive the benefits but for their drug offense convictions. To determine the extent to which drug offenders were otherwise qualified or eligible to receive federal benefits, we identified key elements of the eligibility to receive federal benefits. We met with officials at the federal agencies responsible for administering TANF—the Department of Health and Human Services—and food stamps—the U.S. Department of Agriculture—to discuss issues related to eligibility to receive these benefits. We obtained and analyzed data from BJS on the characteristics of drug offenders in prison, and we applied this information to the number of drug offenders released from prison during 2001 in states that fully implemented the ban on TANF. To determine the current status of states that have opted out of or modified federal provisions banning TANF and food stamp benefits to persons convicted of drug felony offenses, we reviewed state laws and contacted officials at USDA (which annually surveys states about the status of their laws in relation to the ban on food stamps) and state officials in states that have modified the federal ban on TANF or food stamps to discuss the status of their provisions regarding the exemptions under their state laws. Appendix II provides detailed information on our methodology for assessing the TANF and food stamps bans.
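The estimation approach described above, in which survey-derived shares of offenders with qualifying characteristics are applied to counts of drug offenders released from prison, can be sketched as follows. This is an illustrative sketch only: the state names, release counts, and the 15 percent qualifying share are hypothetical stand-ins, not the report's actual data.

```python
# Illustrative sketch of the estimation approach: apply a share derived
# from the BJS inmate survey (offenders who were custodial parents of
# minor children with incomes below eligibility limits) to state counts
# of drug offenders released from prison during a year.
# All figures below are hypothetical, not the report's actual data.

def estimate_potentially_affected(released_by_state, qualifying_share):
    """Estimate offenders who, but for the ban, might have qualified.

    released_by_state: counts of drug offenders released from prison,
        by state, for states that fully implemented the ban.
    qualifying_share: survey-derived fraction meeting the benefit's
        other eligibility criteria (custodial parent, income limits).
    Returns (estimated count affected, share of all releases).
    """
    total_released = sum(released_by_state.values())
    affected = total_released * qualifying_share
    return affected, affected / total_released

# Hypothetical release counts for three states and a hypothetical
# 15 percent qualifying share.
released = {"State A": 4000, "State B": 2500, "State C": 1500}
count, pct = estimate_potentially_affected(released, 0.15)
print(f"Estimated potentially affected: {count:.0f} ({pct:.0%} of releases)")
```

Note that, as the report cautions, such an estimate captures only those initially affected; offenders who meet the general eligibility criteria at a later date would raise the share ever affected over time.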
From the following sources, we obtained, assessed the reliability of, and analyzed data related to denial of federal benefits that we used in developing estimates of the impacts of the federal provisions. To assess the reliability of the data, as needed, we interviewed agency officials about the data systems, reviewed relevant documentation, and conducted electronic tests of the data. We determined that the data were sufficiently reliable for the purposes of this report. The data sources included the following:

- Bureau of Justice Assistance: Data on the number of drug offenders reported to BJA by state and federal courts as having been denied federal benefits under the Denial of Federal Benefits Program from 1991 to 2004.
- Bureau of Justice Statistics, Survey of Inmates of State Correctional Facilities in 1997: We used these data to estimate the number of convicted drug felons in prison that were parents of minor children, lived with their children prior to their incarceration, and had incomes within state earned income limits. We used these estimates to assess the impacts of the provisions allowing for the denial of TANF and food stamp benefits.
- Bureau of Justice Statistics, National Corrections Reporting Program, 2001: We used these data to obtain counts of the number of drug offenders released from prison during 2001 in selected states. We also used these data to provide estimates of the level of education completed by drug offenders released from prison during 2001 and in developing our estimates of the impacts of the TANF and food stamps provisions.
- Bureau of Justice Statistics, Survey of Adults on Probation, 1995: We used these data, from the only national source of data on the characteristics of adults on probation of which BJS is aware, to learn about the education levels of drug offenders on probation and in developing estimates of the impact of denying federal postsecondary education assistance.
- Selected state corrections and court officials: For selected states that fully implemented the ban on TANF and food stamps, we obtained data on the numbers of convicted drug felons released from prison during 2001. We used these data in developing estimates of the impacts of the TANF and food stamps provisions.
- Department of Housing and Urban Development: We obtained and analyzed data from HUD's Public Housing Assessment System (PHAS) and Management Operations Certification Assessment System (MASS) for fiscal years 2002 and 2003 on the number of public housing residents evicted because of criminal activities (of which drug-related criminal activities form a subset) and on the numbers denied admission into the Public Housing Program for reasons of criminal activities.
- Seventeen of the 40 largest PHAs in the nation: We requested information from the 40 largest PHAs about the number of decisions they made during 2003 to deny federally assisted housing to tenants and applicants for reasons of drug-related criminal activities, and we obtained data from 17 of these PHAs. Not all 17 PHAs provided responses to all of our questions; therefore, we reported data only on the PHAs that were able to provide data relevant to the question under review. We selected these PHAs from among the 1,531 PHAs that managed both Public Housing and Housing Choice Voucher (HCV) programs as of August 31, 2004. We asked them for information about denials of federally assisted housing for reasons of drug-related criminal activities, and we also asked them to provide these data based on the race of tenants and applicants. HUD does not collect this information. We used these data in describing the number of persons denied federally assisted housing and in providing information about the race of persons denied federal housing benefits.
- Department of Education: We obtained and analyzed data on the number of students applying for federal postsecondary assistance for academic years 2001-2002, 2002-2003, and 2003-2004. In addition, we obtained data on the percentage of these applicants who were eligible to receive Pell Grants and, of these, the percentage that received them, and we also obtained data on the percentage of applicants who received student loans. We used these data in developing estimates of the impact of the denial of federal postsecondary education assistance.

In addition, we used published statistical reports from various agencies such as BJS; Uniform Crime Reports data on drug abuse violation arrests by state; Department of Health and Human Services reports on the characteristics of TANF recipients; USDA reports on food stamp recipients; and the United States Sentencing Commission's 2003 Sourcebook of Federal Sentencing Statistics. We were asked to address the impacts of the federal benefit denial laws on racial minorities and the long-term impacts of denying federal benefits on individuals that were denied, their families, and their communities. Although very limited, the available information on these issues is summarized in appendixes II through V. To determine the extent of data on the race of persons affected by the denial of federal benefit provisions, we asked the officials that we interviewed about their knowledge of data on the race of persons denied federal benefits. We also spoke with researchers and officials at various organizations about their knowledge of available data. To address data limitations of HUD data on persons denied federally assisted housing because of drug-related criminal activities, we requested, obtained, and analyzed data provided by 17 of the largest PHAs in the nation on the race of persons denied housing for reasons of drug-related criminal activity.
To determine the current research and data on the potential economic and social impacts of the loss of federal benefits on individuals, families, and communities, we conducted literature searches to identify and review existing studies that have measured the impacts of the denial of federal benefits on drug offenders and families. We interviewed experts to understand how the incentives for drug treatment, as provided in the laws that deny benefits, are likely to affect drug addicts’ behavior, and we obtained their views regarding the effects that incarceration and drug convictions might have on a drug felon’s potential employment and earnings. We conducted our work from March 2004 to July 2005 in accordance with generally accepted government auditing standards. This appendix describes the legal and administrative framework for denying TANF and food stamp benefits to convicted drug felons and our methods for estimating the percentage of convicted drug offenders that would have been eligible to receive TANF and food stamps but for their drug felony convictions. The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 provides that persons convicted of certain drug felony offenses are banned for life from receiving TANF and food stamp benefits. Specifically, Section 115 of PRWORA, as amended, provides that an individual convicted (under federal or state law) of any offense that is classified as a felony by the law of the jurisdiction involved and that has as an element the possession, use, or distribution of a controlled substance shall not be eligible to receive TANF assistance or food stamp benefits. The prohibition applies if the conviction is for conduct occurring after August 22, 1996. 
TANF assistance includes benefits designed to meet a needy family’s ongoing, basic needs (for example, for food, clothing, shelter, utilities, household goods, and general incidental expenses) and includes cash payments, vouchers, and other forms of benefits. TANF assistance excludes short-term episodic benefits that are not intended to meet recurrent or ongoing needs and that do not extend beyond 4 months. The federal prohibition on TANF assistance to convicted drug felons does not apply to TANF “nonassistance” benefits, which include benefits meant to assist an individual’s nonrecurring emergency needs. TANF nonassistance can include drug treatment, job training, emergency Medicaid medical services, emergency disaster relief, prenatal care, and certain public health assistance. The Food Stamp Program provides benefits in the form of electronic benefit cards, which can be used like cash for food products at most grocery stores. Eligible households receive a monthly allotment of food stamps based on the Thrifty Food Plan, a low-cost model diet plan based upon the National Academy of Sciences’ Recommended Dietary Allowances. For persons between the ages of 18 and 50 who are also viewed as fit to work and who are not the guardians of dependent children, PRWORA provides for a work requirement or a time limit for receiving food stamp benefits. The provision is known as the Able-Bodied Adults without Dependents (ABAWD) provision. ABAWD participants in the food stamp program are limited to 3 months of benefits in a 3-year period unless they meet certain criteria. PRWORA provides that states may enact a legislative exemption removing or limiting the classes of convicted drug felons that are otherwise affected by the federal ban on TANF and food stamps. State laws providing for exemptions need to have been enacted after August 22, 1996. The Office of the Administration for Children and Families (ACF) within the U.S.
Department of Health and Human Services provides federal oversight of the TANF program. TANF is funded by both federal block grants and state funds, but states are responsible for determining benefit levels and categories of families that are eligible to receive benefits. State eligibility requirements establish earned income limits and other rules, and these requirements may vary widely among the states. The U.S. Department of Agriculture’s Food and Nutrition Service (FNS) provides oversight for the Food Stamp Program, which is the primary federal food assistance program that provides support to needy households and to those making the transition from welfare to work. Eligibility for participation is based on the Office of Management and Budget federal poverty guidelines for households. Most households must meet gross and net income tests unless all members are receiving TANF or selected other forms of assistance. Gross income cannot exceed 130 percent of the federal poverty guideline (or about $1,313 per month for a family of two and $1,654 per month for a family of three in 2004), and net income cannot exceed 100 percent of the poverty guideline (or about $1,010 per month for a family of two and $1,272 per month for a family of three in 2004). “Gross income” means a household’s total, nonexcluded income before any deductions have been made. “Net income” means gross income minus allowable deductions. Allowable deductions include a 20 percent deduction from earned income, dependent child care deductions, and medical expenses, among others. According to officials at ACF and FNS, states may implement the provisions to deny convicted drug felons TANF and food stamps in a variety of ways. Some states administer the denial of benefits by requiring applicants to admit to disqualifying felony drug offense convictions at the time that they apply for benefits.
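The gross and net income tests described above can be illustrated with a short sketch. The poverty guideline amounts are the 2004 monthly figures cited in the text; the example household and its deduction amount are hypothetical, and the only deduction applied here is the 20 percent earned income deduction.

```python
# Sketch of the Food Stamp Program income tests described above, using the
# 2004 monthly federal poverty guideline figures cited in the text. The
# example households and the other_deductions amount are hypothetical.

# 100 percent of the 2004 federal poverty guideline, monthly, by household size
POVERTY_GUIDELINE = {2: 1010, 3: 1272}

def passes_income_tests(household_size, monthly_earned_income, other_deductions=0):
    """Return True if the household meets both the gross income test
    (130 percent of the guideline) and the net income test (100 percent)."""
    guideline = POVERTY_GUIDELINE[household_size]
    gross_limit = 1.30 * guideline   # about $1,313 (size 2) / $1,654 (size 3)
    net_limit = 1.00 * guideline     # about $1,010 (size 2) / $1,272 (size 3)
    gross_income = monthly_earned_income
    # Allowable deductions include a 20 percent deduction from earned income,
    # plus dependent child care, medical expenses, and others (not modeled here).
    net_income = gross_income - 0.20 * monthly_earned_income - other_deductions
    return gross_income <= gross_limit and net_income <= net_limit

# A family of three earning $1,500/month passes both tests; at $1,700/month
# it fails the gross income test.
```

The sketch omits the dependent care and medical deductions, so it understates eligibility for households that could claim them.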
Also according to agency officials, neither agency regularly collects and assesses data on the number of persons that self-certify disqualifying drug offenses. We reviewed documentation provided by USDA, and for states that exempted some or all convicted drug felons from the federal ban on food stamps, we reviewed the states’ laws pertaining to the exemption and contacted state officials to determine the status of each state’s exemptions to the federal bans on TANF and food stamps. Table 5 shows these statuses and, for states that have enacted exemptions, provides citations to the state laws. There are several general types of modifications to the federal ban on TANF and food stamps among the states that have modified the ban. These modifications may include one or more of the following elements: (1) removing from the ban drug felons convicted for drug use or simple possession, but implementing the ban for drug sellers or traffickers (e.g., possession with intent to distribute offenses); (2) restoring benefits to drug felons complying with drug treatment program requirements; (3) restoring benefits so long as drug felons have negative drug test results over some period of time; and (4) restoring benefits to drug felons after various waiting periods, such as a number of years after conviction or release from prison. State modifications may also include other conditions. For example, Michigan allows convicted drug felons to receive benefits provided they do not violate the terms of their parole or probation and other conditions are met. Tables 6 and 7 show the types of modifications that states have adopted for the TANF and food stamp bans, respectively. These tables present general categories of different modifications, not an exhaustive listing of all specific requirements. For more detail, consult the statutes listed in table 5.
Estimating the Percentage of Drug Arrests within States That Implement, Modify, or Opt Out of the Bans on TANF and Food Stamps

To obtain a general assessment of the degree to which state decisions to modify or opt out of the federal bans on TANF and food stamps exempt drug felons from the federal ban, we estimated the percentage of drug arrests that occurred within three groupings of states: (1) those that fully implement the bans, (2) those that have modified them, and (3) those that have completely opted out of the bans. We used drug arrests as a proxy for drug convictions, as state-level data on the number of drug felony convictions are not available. We analyzed data from the 2002 Crime in the United States: Uniform Crime Reports on the number of persons arrested for drug offenses in each of the 50 states. Table 8 reports the relative distributions of drug arrests for the states falling into each category for the TANF and food stamp bans. To assess the potential impacts of the bans on TANF and food stamps, we estimated the percentage of a population of drug felons released from prison that, but for their drug offense convictions, would have been eligible to receive the benefits. By potentially affected, we refer to convicted drug felons that we estimated met selected eligibility criteria to participate in these benefit programs. According to our use of the term “impact,” only those drug felons who were otherwise eligible to receive benefits actually stood to lose benefits as a result of the bans, and could therefore be affected by the bans. To determine the percentage of drug felons that met selected eligibility criteria, we used data from the Bureau of Justice Statistics’ Survey of Inmates of State Correctional Facilities in 1997. This survey is based upon a nationally representative sample of persons in state prisons during July 1997.
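The grouping computation described above reduces to summing state arrest counts within each category and expressing each category as a share of the national total. The state assignments and arrest counts in the sketch below are hypothetical, not the Uniform Crime Reports figures underlying table 8.

```python
# Sketch of the arrest-distribution computation described above: using drug
# arrests as a proxy for drug felony convictions, sum arrests within each
# group of states (full ban, modified, opted out) and express each group as
# a percentage of the total. State assignments and counts are hypothetical.

def arrest_shares(arrests_by_state, group_by_state):
    """Return the percentage of all drug arrests occurring in each group of states."""
    totals = {}
    for state, count in arrests_by_state.items():
        group = group_by_state[state]
        totals[group] = totals.get(group, 0) + count
    grand_total = sum(totals.values())
    return {group: 100.0 * t / grand_total for group, t in totals.items()}

# Hypothetical inputs for three states:
arrests = {"A": 50_000, "B": 30_000, "C": 20_000}
groups = {"A": "full ban", "B": "modified", "C": "opted out"}
# arrest_shares(arrests, groups) -> {"full ban": 50.0, "modified": 30.0, "opted out": 20.0}
```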
The 1997 data represent the most recently available data from this recurrent survey, which BJS conducts about every 5 years. We used information from the survey about prisoners’ parental status, employment, and income prior to incarceration in developing our estimates of the percentages of drug offenders that were custodial parents and had incomes within allowable maximums to qualify for the benefits. For both benefits, we provide estimates that are based on drug offenders released from prison during 2001 in the subset of states that fully implemented either the ban on TANF or food stamps. To the extent possible, we limited the data on drug offenders released from prison to those who entered prison during 1997 or thereafter. This allowed for a period of time between the possible date that a drug felony offense was committed and the date that an offender entered prison, and in this way, we took into account the implementation date of the ban, which was August 22, 1996. Because of data limitations, we did not attempt to develop estimates for states that modified the bans. For example, some states’ exemptions to the bans allow that convicted drug offenders may receive benefits (provided that they are eligible for them) if they do not fail a drug test, if they undergo required drug treatment, if they do not violate conditions of probation or parole supervision, or if they meet certain other conditions. The data that we used did not include this information; therefore, we could not estimate the potential impacts of the bans in the states that modified the bans. We developed estimates of the potential impact of these bans on the population of released prisoners for 1 year, 2001, the most recent year for which we obtained data. We did not attempt to develop estimates for all persons potentially affected by the bans since they went into effect during 1996. 
We discuss the problems associated with estimating all persons potentially affected by the bans in a later section of this appendix.

Data and Methods Used to Estimate the Potential Impacts of the TANF Ban

To estimate the potential impacts of the TANF ban, we obtained data from states on drug felons released from prison, and using these data, we applied estimates of the percentages that met selected TANF eligibility requirements. These methods are described more fully below. For 14 of the 18 states that fully implement the ban on TANF, we obtained data on the number of drug offenders released from prison during 2001. We used two sources of data: (1) the Bureau of Justice Statistics’ National Corrections Reporting Program (NCRP) and (2) data from selected other states. From NCRP, we obtained counts of the number of drug felons released from prison during 2001, given that they were committed into prison in 1997 or thereafter for a new conviction that contained a drug offense. We chose 1997 because the TANF ban went into effect on August 22, 1996, and data on the date that ex-prisoners committed their drug offense—which is the factor that determines whether they are under the ban—were not available in the data that we used. From the other states, we obtained comparable data on the number of drug offenders released from prison. The 14 states for which we obtained data were Alabama, Arizona, California, Georgia, Kansas, Mississippi, Missouri, Nebraska, North Dakota, South Carolina, South Dakota, Texas, Virginia, and West Virginia. The 14 states account for approximately 97 percent of the population in the 18 states that maintain the ban on TANF for drug felons. For the 4 states that were excluded from our analysis—Alaska, Delaware, Montana, and Wyoming—we were unable to obtain data on released prisoners. We also excluded from our analysis states that may have implemented the ban in 2001 but as of January 2005 had modified or opted out of the ban.
Across the 14 states, about 96,000 drug offenders were released from prison during 2001, given that they had been admitted during 1997 or thereafter. This population of all drug felons released from prison includes those who were sentenced to prison following their conviction for a drug offense, and it also includes offenders who entered prison because they had violated conditions of supervision. Among offenders who entered prison for a violation of conditions of supervision, some may have committed their offenses before the TANF ban went into effect, and they would not be subject to the ban. However, some of the released prisoners who had violated conditions of supervision may have been convicted after the ban went into effect, but the available information reported only the date of admission for the violation and not for the original sentence. These offenders should be included among the population of drug felons that are subject to the ban. Hence, the population of all released prisoners might overestimate the number of drug offenders in these 14 states who committed offenses after the TANF ban had gone into effect. About 51,000 of the drug offenders released from prison during 2001 were those who had been admitted into prison during 1997 or thereafter, immediately after their conviction. While this population of released drug offenders includes those whose prison sentence occurred after the ban went into effect, this number may underestimate the number of drug felons in these states who were subject to the ban. It may do so because it will exclude the parole violators who had initially been committed after 1997 but whose most recent commitment was for a violation of parole that also occurred after 1997. About 87 percent of all drug offenders released from prison during 2001 in the 14 states for which we obtained data were males, as were about 86 percent of the first releases. Females constituted 13 percent of all releases and 14 percent of first releases (table 9).
To receive TANF assistance, an assistance unit (such as a household) must meet the state-mandated definition of a needy family: It must either contain at least one child living with an adult relative or consist of a pregnant woman. The adult guardian must be related to the child by blood, adoption, or marriage (or, if the state provides, the adult may stand in for parents if none exist). Further, TANF recipients must in general be either U.S. citizens or qualified aliens who entered the United States prior to the passage of PRWORA on August 22, 1996, or who have lived in the United States for a period of 5 years. States may also impose other conditions for receipt of TANF benefits. We used data from the 1997 version of the BJS Inmate Survey to estimate the percentage of drug offenders who were custodial parents and who had monthly incomes within state-determined earned income limits. For estimation purposes, we defined a drug offender in the inmate survey as a custodial parent if the offender met three conditions: (1) reported being the parent of at least one minor child, (2) reported living with the child prior to being incarcerated, and (3) reported that the child was not in foster care or agency care while the offender was in prison. We computed the number of prisoners who met these conditions, and from these counts, we estimated the percentages of drug offenders that met these conditions. As the data were drawn from a sample, we used weighting factors provided by BJS that were based on the original probabilities of being selected into the sample that were adjusted for nonresponse and information about the sex, race, age, prison security level, and type of offense of the total prison population to produce national-level estimates. We estimated the percentages separately by gender and region of the country. 
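The weighted survey estimation described above can be sketched as follows: each sampled inmate carries a BJS-supplied weight, and the estimated percentage of drug offenders meeting the three-part custodial-parent definition is the weighted share of respondents meeting all three conditions. The records and field names below are hypothetical illustrations, not the actual inmate survey layout.

```python
# Sketch of the weighted estimation described above. Each survey record
# carries a BJS-supplied weight (selection probability adjusted for
# nonresponse and population characteristics); the estimate is the weighted
# percentage of respondents meeting a condition. Field names are hypothetical.

def weighted_percentage(records, condition):
    """Weighted percentage of records satisfying `condition`."""
    matched = sum(r["weight"] for r in records if condition(r))
    total = sum(r["weight"] for r in records)
    return 100.0 * matched / total

def is_custodial_parent(r):
    # The three conditions in the text's operational definition: reported a
    # minor child, lived with the child before incarceration, and the child
    # was not in foster or agency care during imprisonment.
    return (r["has_minor_child"]
            and r["lived_with_child"]
            and not r["child_in_foster_care"])

# A hypothetical three-record sample:
sample = [
    {"weight": 120.0, "has_minor_child": True, "lived_with_child": True,
     "child_in_foster_care": False},
    {"weight": 80.0, "has_minor_child": True, "lived_with_child": False,
     "child_in_foster_care": False},
    {"weight": 100.0, "has_minor_child": False, "lived_with_child": False,
     "child_in_foster_care": False},
]
# weighted_percentage(sample, is_custodial_parent) -> 40.0 (120 of 300 weighted)
```

In practice the percentages would be computed separately within gender-by-region cells, as the text describes.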
Table 10 shows our estimates of the percentage of convicted drug felons that were reported to be parents and custodial parents (based on our definition) of minor children. We also estimated the income distributions for drug offender parents who reported living with their children. In the BJS survey, income is reported as the offender’s total income in the month prior to the arrest leading to the incarceration. Monthly income can be from any source and may include illegal income. We omitted from our analysis those offenders who reported income from illegal sources, and we included only offenders who reported earned income or who were unemployed prior to their imprisonment. Offenders who were unemployed prior to their imprisonment received a value of zero for earned income. We estimated the income distributions separately by gender and region to account for differences in employment and earnings between male and female offenders, and offenders in different states. We applied the regional income distributions to all states within a region, as the BJS data did not report the state in which the offender was incarcerated. From the income distributions, we estimated the gender-specific percentages of drug offenders who had incomes at or below state-determined earned income limits. The BJS inmate survey data report income in intervals, and in many cases, the intervals do not correspond directly with the state earned income limits. Therefore, we selected income intervals that were as near to the state earned income limits as feasible. We generally selected two income intervals for each state: one that contained the state earned income limit level but whose lower bound was less than the state level, and one that contained the state earned income limit but whose upper bound was above the state level. In this way, we obtained upper- and lower-bound estimates of the potential impacts of the TANF ban.
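The interval-bracketing approach described above can be sketched as a cumulative-share calculation: intervals lying entirely below the state earned income limit count toward both bounds, and the interval that straddles the limit counts only toward the upper bound. The intervals and shares below are hypothetical.

```python
# Sketch of the upper- and lower-bound estimation described above. Survey
# income is reported in intervals; when a state's earned income limit falls
# inside an interval, the cumulative share below the straddling interval
# gives a lower bound and the share including it gives an upper bound.
# Intervals and percentage shares are hypothetical.

def eligibility_bounds(intervals, state_limit):
    """intervals: list of (lower, upper, pct_of_offenders_in_interval) tuples,
    sorted by income. Returns (lower_bound_pct, upper_bound_pct) of offenders
    with income at or below the state earned income limit."""
    lower = upper = 0.0
    for lo, hi, pct in intervals:
        if hi <= state_limit:       # interval lies entirely below the limit
            lower += pct
            upper += pct
        elif lo < state_limit:      # interval straddles the limit
            upper += pct            # counted only in the upper bound
    return lower, upper

# Hypothetical income intervals (monthly dollars, percent of offenders):
intervals = [(0, 500, 30.0), (500, 1000, 25.0), (1000, 1500, 20.0), (1500, 10_000, 25.0)]
# With a $1,200 state earned income limit:
# eligibility_bounds(intervals, 1200) -> (55.0, 75.0)
```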
To obtain estimates of the percentage of drug offenders released from prison who were both custodial parents and were income eligible for TANF, as defined above, we applied the gender-specific estimates of the percentage of prisoners in each region of the country that met the specific TANF eligibility criteria to state-specific counts of the number of drug felons released from prison. We used the region of the country within which a state was located to obtain estimates for a specific state. The estimated percentages of drug offenders released from prison that met these conditions are shown in table 11. We were unable to take into account all of the factors that determine whether drug offenders met the eligibility criteria to receive TANF. Some of these factors could contribute to reducing the estimated percentages of drug offenders who were otherwise eligible; others could possibly contribute to increasing the estimated percentages. In addition, our estimates for drug offenders released from prison in a given year do not apply to drug felons who were sentenced to probation. Finally, we are unable to provide an estimate of the percentage of drug offenders potentially affected by the ban for the entire period since it was implemented. Data limitations preclude our explicitly taking into account all of the factors that are related to TANF eligibility. Factors affecting TANF eligibility for which we do not have data are the citizenship status and length of residency of noncitizens, state-imposed work requirements to receive TANF, and individual choices to participate in the program.
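Combining the two data sources as described above amounts to applying the gender-specific regional eligibility percentage to each state's count of released drug offenders, using the region in which the state is located. All counts, regions, and rates in this sketch are hypothetical.

```python
# Sketch of the combination step described above: apply gender-specific
# regional percentages of offenders meeting the TANF criteria to
# state-specific counts of released drug felons. Inputs are hypothetical.

def estimated_affected(releases, region_of_state, rate_by_region_gender):
    """releases: {(state, gender): count of drug offenders released}.
    rate_by_region_gender: {(region, gender): pct meeting the TANF criteria}.
    Returns the total estimated number otherwise eligible for TANF."""
    total = 0.0
    for (state, gender), count in releases.items():
        rate = rate_by_region_gender[(region_of_state[state], gender)]
        total += count * rate / 100.0
    return total

# Hypothetical single-state example:
releases = {("X", "male"): 10_000, ("X", "female"): 2_000}
region_of_state = {"X": "South"}
rates = {("South", "male"): 5.0, ("South", "female"): 25.0}
# estimated_affected(...) -> about 1,000 (10,000 x 5% + 2,000 x 25%)
```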
While we were unable to estimate the effect of these factors on our estimates, these factors would contribute to lowering the estimated percentage of drug offenders released from prison that might have been eligible to receive TANF. Several of the states whose data we analyzed have relatively large populations of noncitizens. In general, to qualify for TANF, aliens must have at least 5 years of residence in the United States since August 22, 1996. Given that our estimates are for 2001, it is unlikely that many aliens among convicted drug felons would have qualified for TANF. Hence, taking the alien qualification into account would lower our estimates of the percentage of drug felons potentially affected by the TANF ban. For 2003, ACF reports that 8 percent of adult TANF recipients were qualified aliens. Individuals within needy families who do not participate in state-determined work requirements could lose their TANF eligibility. Failing to comply with work requirements would reduce the percentage of drug offenders that were otherwise eligible to receive TANF. In the general population, adult males constitute comparatively small numbers of TANF recipients. According to ACF, in 2001 adult males constituted about 9 percent of all adult TANF recipients. If we applied the general population adult male TANF recipiency rate to our estimates of the percentage of all drug offenders released from prison, our estimated impact of the TANF ban would be revised downward to about 4 percent of all drug offenders released from prison in 2001. One factor that could change the estimated percentage of convicted drug felons eligible to receive TANF benefits and therefore potentially affected by the ban is a change in a felon’s eligibility to receive TANF. Our estimates of the percentage of prisoners that may be eligible to receive TANF are based on attributes existing at the time that offenders were in prison.
Upon release, these attributes may change, and an offender might become otherwise eligible for TANF and therefore potentially be affected by the ban. For example, if a drug offender was reunited with his or her children after release and met other eligibility requirements, this would contribute to increasing the percentage of released prisoners that were eligible to receive TANF. Alternatively, imprisonment may be a factor that reduces contact with children and therefore contributes to decreasing the percentage of drug offenders released from prison that are eligible to receive TANF. In recent years, drug felons sentenced to probation account for about one-third of all convicted and sentenced drug felons. We did not apply the information about drug offenders in prison to the drug felons sentenced to probation. This is because we do not have data on the parental and income characteristics of drug felons sentenced to probation. To the extent that drug felons sentenced to probation have characteristics similar to those of drug felons released from prison, the estimated percentage of probationers that may be eligible to receive benefits would be similar to those estimated percentages among released prisoners. However, if income levels and other factors differ between probationers and prisoners, this could affect the estimates of the percentages that would be eligible to receive benefits. We do not provide an estimate of all drug offenders potentially affected by the ban on TANF since it went into effect. We were unable to obtain data on the number of persons convicted of drug felonies since the ban went into effect in 1996, as only limited data are available. Over time, an individual’s attributes that are related to TANF eligibility may change. Convicted drug felons who did not have characteristics that would make them eligible to receive TANF at one point in time could develop these attributes at a later point in time.
Conversely, the circumstances of convicted drug felons who at one point in time were otherwise eligible to receive TANF could change so that they are no longer otherwise eligible. To understand the long-term impacts of the ban therefore would require data that track individuals over time and measure changes in their characteristics that are related to TANF eligibility. We know of no such national data on drug offenders. Our estimates of the percentage of drug offenders released from prison in a given year who are potentially affected by the ban represent lower-bound estimates of the proportion of drug offenders released from prison during that year that would ever be affected by the ban. If, among those released from prison and estimated not to be eligible to receive TANF, any persons became eligible at a later date, this would increase the percentage of persons potentially affected by the ban. Consequently, the long-term impacts of the ban would be greater than the impacts that we estimated for the 1-year release cohort. Similarly, if the 1-year estimates of the percentage potentially affected by the ban were to hold over time, then a larger percentage of all convicted drug felons would be potentially affected by the ban since its inception than the percentages that we estimated for 1 year.

Data and Methods Used to Estimate the Potential Impact of the Ban on Food Stamps

We focused our analysis of the potential impact of the ban on food stamps on drug offenders that were reported to be custodial parents of minor children. According to USDA, in fiscal year 2003, adult households with children (containing either one or two adults) constituted 73 percent of food stamp recipients. Consequently, this is likely to be the largest group of drug offenders that could be affected by the food stamp ban. We were unable to develop a quantitative estimate of the percentage of able-bodied adults without dependents (ABAWD) that could be affected by the food stamp ban.
ABAWDs, in general, may receive food stamps for 3 months within a given 3-year period, or longer if they adhere to the work requirements specifically laid out for ABAWDs. However, we were unable to determine which drug offenders constituted the potential pool of ABAWDs. We further gave potential ABAWD recipients separate consideration because, according to USDA reports, in 2003 they constituted only 2.5 percent of food stamp recipients nationwide even though such persons form a large share of the general population. We also did not attempt to develop an estimate of the impact of the ban for elderly and disabled drug offenders. For 2003, USDA reported that adult households with children (containing either one or two adults) constitute 73 percent of food stamp recipients. In contrast, elderly individuals living alone constitute 6 percent of food stamp recipients, and disabled nonelderly individuals living alone constitute 5 percent of food stamp recipients. Single-adult households—which according to USDA do not contain children, elderly individuals, or disabled individuals—constitute 6 percent of food stamp recipients. Therefore, adult households with children receive food stamps at a rate greater than 12 times the rate at which single-adult households receive food stamps. The percentage of single-adult households receiving food stamps is higher than the percentage of ABAWDs receiving food stamps because an individual is not considered an ABAWD if the person is pregnant, exempt from work registration, or over 50 years of age. For 12 of 15 states that maintain the full ban on food stamps, we obtained data on the drug felons released from prison during 2001 (given that they entered prison during 1997 or thereafter). The 12 states are Alabama, Arizona, Georgia, Kansas, Mississippi, Missouri, North Dakota, South Carolina, South Dakota, Texas, Virginia, and West Virginia.
The 3 excluded states for which we were unable to obtain data were Alaska, Montana, and Wyoming. A total of 67,000 drug offenders were released in 2001 in the 12 states, and of these, 30,000 were first releases from new court commitments. We used the BJS inmate survey data to estimate the percentage of drug felony prisoners who were parents living with their minor children and whose children were not in foster care while they were incarcerated. This was our operational definition for a custodial parent. For these, we estimated the percentage who had gross incomes within the poverty thresholds, based on estimates of family size. Food stamp eligibility is based on gross and net income tests. Because data on the deductions that are used in determining whether households meet the net income tests were not available, our estimates reflect only the gross income test. We are unable to determine how our use of the gross income test alone affects our estimates of the percentage of drug felons released from prison that would have been eligible to receive food stamps. In general, ABAWDs may receive food stamp benefits for an extended duration as long as they meet ABAWD-specific work requirements. This means that a large percentage of drug felons could be eligible to receive, and therefore potentially be denied, food stamps as long as they fell within the income threshold to receive food stamps. However, among all food stamp recipients, ABAWDs constitute only 2.5 percent of the total. Hence, while we cannot estimate the percentage of ABAWDs within the drug offender pool that would be otherwise eligible to receive food stamps, the ABAWD participation rate in food stamps in general would suggest that relatively few drug offenders who fall into this category would participate in the program.
We assessed the impacts of the denial of TANF and food stamp benefits by estimating the percentage of convicted drug felons released from prison who were otherwise eligible to receive the benefits. To assess whether impacts vary by race, we first assessed whether the percentage of drug offenders who met the same eligibility requirements that we used to assess the overall impacts of the TANF and food stamp bans varied according to race. For example, if larger proportions of black than white drug offenders were custodial parents of minor children and had earned income that permitted them to qualify for TANF, then we would expect to find larger percentages of black drug offenders to be affected by the TANF ban, regardless of the racial composition of the group of all drug offenders released from prison. We used the BJS inmate survey data to compare the estimated percentages of black and white drug offenders who were custodial parents (as we defined the term previously) and had earned incomes that could qualify them to receive TANF. As before, we estimated these percentages by gender and region. Our estimates indicated that in one region (the South), the percentage of black female drug offenders who were otherwise eligible to receive TANF differed from the percentage of otherwise eligible white female drug offenders. A larger percentage of black female drug offenders in that region were estimated to be eligible to receive TANF than white female drug offenders in the region. Among male drug offenders, we estimated differences in eligibility for TANF in two regions. For both female and male drug offenders, the differences in estimated TANF eligibility arose from differences in incomes, as there were no differences in the percentage of black and white drug offenders that were estimated to be custodial parents. 
This appendix describes the legal framework for denying federal higher education benefits to drug offenders, how the federal provision is administered, our methods for estimating the number of students affected by the federal provisions, and the impacts of the federal provision. The Higher Education Act of 1965, as amended, provides for the suspension of certain federal higher education benefits to students who have been convicted for the possession or sale of a controlled substance under federal or state law. The controlled substance offense may be either a felony or a misdemeanor. Federal higher education benefits that are denied to such individuals include student loans, Pell Grants, Supplemental Educational Opportunity Grants, and the Federal Work-Study program. The Higher Education Act provision outlines different periods for which such drug offenders are ineligible to receive certain federal higher education benefits, depending upon the type and number of controlled substance convictions. The period of ineligibility begins on the date of conviction and ends after a specified interval. Table 12 illustrates the period of ineligibility for the federal higher education benefits, according to the type and number of convictions. This Higher Education Act provision allows eligibility for federal higher education benefits to be restored prior to the end of the period of ineligibility if either one of two conditions is met. First, a student satisfactorily completes a drug rehabilitation program that includes two unannounced drug tests and complies with criteria established by the Secretary of Education. Second, a student has his or her drug conviction reversed, set aside, or nullified. The provisions of federal law mandating the denial of certain federal higher education benefits were implemented beginning in July 2000 by requiring students who applied for federal assistance to self-report disqualifying drug convictions.
Students must self-report disqualifying drug convictions through the Department of Education’s Free Application for Federal Student Aid (FAFSA), a form that any student who wishes to receive federal student aid must complete. The FAFSA is available online and is free to use. The Department of Education (ED) uses the information that applicants provide on their FAFSA to determine their eligibility for aid from the Federal Student Aid (FSA) programs. Colleges and universities in 49 states also use information from the FAFSA in making their financial aid determinations. ED provides participating colleges and universities with a formula to use when making decisions about financial assistance. Applicants who either report a drug conviction that affects their eligibility or do not answer the question about drug convictions are automatically ineligible to receive federal higher education assistance in the academic year for which they sought aid. (Below, we refer to this group as FAFSA ineligibles.) The drug conviction worksheet of the FAFSA also notifies students that even though a drug conviction may render them ineligible to receive federal higher education assistance in the application year, individuals may still be eligible to receive aid from their state or their academic institution. For several reasons, not all of the FAFSA applicants who self-report a disqualifying drug conviction would otherwise have been eligible to receive federal assistance; hence, the number of applications containing self-reported disqualifying drug offenses overstates the number of persons denied federal postsecondary education assistance because of a drug offense conviction. First, not all FAFSA applicants are eligible to receive all types of federal postsecondary education assistance.
For example, some applicants may have incomes above the levels required to receive Pell Grants, and even if they self-reported a disqualifying drug conviction, they would not have been eligible to receive Pell Grants. Second, ED officials indicated that not all FAFSA applicants become enrolled in postsecondary education institutions, and these applicants are not eligible to receive federal postsecondary education assistance. Third, some individuals may complete the FAFSA application more than one time, and by counting only the number of applications, some individuals may be double-counted. To assess the impacts of the Higher Education Act’s provisions that render students with disqualifying controlled substances convictions ineligible to receive federal postsecondary education assistance, we estimated the number of students who self-reported a disqualifying drug offense and, absent the controlled substances convictions provisions of the Higher Education Act, would have been qualified to receive assistance but because of the provisions would not have received assistance. We developed estimates of the number of applicants for Pell Grants and subsidized and unsubsidized Stafford loans (two of the best-funded federal postsecondary education assistance programs) and the total amounts of assistance lost, because of their self-reported controlled substances convictions. Our methods for estimating these quantities are as follows: To estimate the number of students who were denied Pell Grants in a given year, we use ED data on the number of FAFSA applicants that either self-reported a disqualifying drug offense conviction or left this question blank, the group that we labeled as FAFSA ineligibles. As applicants must meet needs-based criteria to make them eligible to receive Pell Grants, we then use ED data on the percentage of FAFSA applicants that were eligible to receive Pell Grants; we call this second group Pell Grant eligibles. 
We use ED data on the percentage of Pell Grant eligibles that actually received Pell Grants, as not all of the students who were eligible to receive Pell Grants received them. By multiplying these quantities, we obtained a rough estimate of the number of persons who, absent the disqualifying drug offense conviction, would have received Pell Grants. To estimate the dollar amount of Pell Grants that these recipients would have received, we multiplied the average amount of Pell Grants (which we obtained from ED) by the estimated number of students denied Pell Grants. To estimate the number of student loan recipients who were denied assistance because of disqualifying drug convictions, we followed a method similar to the one that we used to estimate the numbers denied Pell Grants. Specifically, beginning with the data on FAFSA ineligibles, we applied to this number the percentage of all FAFSA applicants that received a student loan. We could not obtain an estimate of the number of FAFSA applicants that were eligible to receive student loans because, as ED reports, unlike Pell Grants, where there are income limitations that can be used to determine eligibility, with student loans, eligibility is determined by both income and institution-specific factors (such as tuition). Thus, our estimate is of the number of FAFSA ineligibles that would have received a student loan but for their controlled substances convictions. To estimate the amount of student loans denied, we multiplied our estimate of the number denied student loans by the average amount of a student loan. In order to create our estimates for the number of individuals who would have received a Pell Grant or a student loan if not for their drug conviction, we assume that the characteristics of FAFSA eligibles are the same as the characteristics of FAFSA ineligibles. 
This assumption means that the percentage of FAFSA applicants who are eligible to receive federal higher education assistance should be the same for FAFSA ineligibles (apart from the drug conviction). Income is an important determinant of eligibility for both Pell Grants and student loans. Specifically, financial need is determined by ED using a standard formula established by Congress to evaluate the applicant’s FAFSA and to determine the student’s Expected Family Contribution (EFC). The EFC calculation includes various data elements including income, number of dependents, net assets, marital status, and other specified additional expenses incurred. Different assessment rates are used for dependent students, independent students without dependents, and independent students with dependents. After filing the FAFSA, a student is notified if he or she is eligible for a federal Pell Grant and of the student’s EFC. On the one hand, if FAFSA ineligibles on average have lower incomes than FAFSA eligibles, then our estimates of the number of students denied benefits are likely to be underestimates of the true number denied benefits. This is because we rely on the information about eligibility for Pell Grants and student loans from the persons who were eligible to receive them, not from the population who are otherwise eligible but for their disqualifying drug convictions. On the other hand, if FAFSA ineligibles are less likely to be enrolled in postsecondary education institutions, as compared with FAFSA eligibles, then our estimates of the number denied benefits are likely to overestimate the true number denied benefits. Table 13 shows the data that we used to estimate the numbers and amounts of federal postsecondary education assistance forgone by students who, absent their controlled substances convictions, would have received federal postsecondary education assistance. The data are provided annually for academic years 2001-2002 through 2003-2004.
The key data elements used to estimate the numbers and amounts of federal assistance denied include the number of FAFSA applicants and FAFSA ineligibles, the percentage of Pell Grant eligibles among all FAFSA applicants, the percentage of Pell Grant recipients among Pell Grant eligibles, the average amount of Pell Grant received, the percentage of FAFSA applicants that received student loans, and the average amount of student loan received. The number of FAFSA ineligibles declined from 58,929 in academic year 2001-2002 to 41,061 in academic year 2003-2004. We note that FAFSA ineligibles amount to less than 0.5 percent of all FAFSA applications. In the academic years from 2001-2002 through 2003-2004, we estimated that between 17,000 and 23,000 students were denied Pell Grants because of their drug convictions, and that the total estimated amount of Pell Grants that these students would have received ranged from $41 million to $54 million. See table 14. We provide annual estimates of the numbers affected because the period of benefit ineligibility can vary, and a student denied benefits in one year may become eligible to receive benefits in a subsequent year. Thus, the estimates for one year do not necessarily affect the estimates for another year. In academic year 2001-2002, there were 58,929 FAFSA ineligibles. During that same year 51.5 percent of FAFSA applicants were eligible to receive Pell Grants, and 76.9 percent of those who were eligible received Pell Grants (as shown in table 13). Multiplying the 58,929 by the 51.5 percent and then multiplying this result by the 76.9 percent results in the estimate of 23,000 individuals denied Pell Grants who otherwise would have received them. To obtain the amount of Pell Grant lost to these students during academic year 2001-2002, we multiplied our estimated number of students denied Pell Grants (23,000) by the average amount of a Pell Grant in academic year 2001-2002 ($2,298). 
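The walk-through above is a simple chain of multiplications. As a minimal sketch, the academic year 2001-2002 figures quoted from table 13 can be combined as follows (the variable names are ours, chosen for illustration; this is not ED's method or code):

```python
# Illustrative replication of the Pell Grant denial estimate for academic
# year 2001-2002, using figures quoted in the text (table 13).
# Variable names are our own, not ED's.

fafsa_ineligibles = 58_929   # applicants reporting (or not answering) a disqualifying conviction
pct_pell_eligible = 0.515    # share of all FAFSA applicants eligible for Pell Grants
pct_pell_received = 0.769    # share of Pell-eligible applicants who actually received a grant
avg_pell_grant = 2_298       # average Pell Grant in 2001-2002, in dollars

# Key assumption from the text: FAFSA ineligibles resemble all FAFSA applicants.
students_denied = fafsa_ineligibles * pct_pell_eligible * pct_pell_received
dollars_forgone = students_denied * avg_pell_grant

# students_denied is roughly 23,000 and dollars_forgone roughly $54 million,
# matching the upper ends of the ranges reported in table 14.
```

For student loans, the same structure applies, except that the single percentage of FAFSA applicants who received a loan replaces the two Pell Grant percentages.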
Table 14 also shows that between academic year 2001-2002 and academic year 2003-2004, an estimated 29,000 to 41,000 students per year would have received student loans if not for their drug convictions. The estimated total amount of student loans forgone by these students ranged between $100 million and $164 million per year. The President’s fiscal year 2005 budget contained a proposal that would have changed the administration of the Higher Education Act provision relating to eligibility for federal higher education benefits. Federal law disqualifies students who have been convicted of controlled substance offenses, in accordance with the period of ineligibility in table 12, from receiving federal higher education assistance. As currently implemented by the Department of Education, disqualifying convictions are those drug convictions on a student’s record at the time the student’s eligibility is being determined, using the rules on the FAFSA worksheet. Under the President’s proposal—which was supported by the Office of National Drug Control Policy—students would be ineligible for federal higher education assistance only if they committed a disqualifying drug-related offense while enrolled in higher education. This proposed change would make eligible all students whose controlled substance convictions occurred prior to enrolling in higher education. Because of data limitations, we are unable to provide reliable estimates of the impacts of the proposed changes contained in the President’s fiscal year 2005 budget proposal. However, we expect that the proposal would lower our estimates for the numbers of students denied benefits because some individuals would regain their eligibility for benefits, and relatively few students enrolled in postsecondary education institutions would be expected to both use drugs and be convicted of drug crimes.
While studies consistently show that the economic returns to higher education are positive, we cannot establish a direct link between the denial of federal postsecondary aid to students and a reduction in the amount of postsecondary education completed by those who were denied aid. Officials at ONDCP suggested that the provisions of the Higher Education Act that deny educational aid to drug offenders might have a deterrent effect on drug use; however, we were unable to identify studies that assess whether the provisions actually helped to deter drug use. We are likewise unable to address whether these provisions of the HEA result in net positive or negative effects on society, because we found no research that conclusively indicates whether the provisions led individuals to forgo postsecondary education or deterred individuals from engaging in drug use and drug-related criminal activities. Additional formal education—e.g., completing high school or attending or completing postsecondary education—has been demonstrated to increase annual and lifetime earnings. In its review of the returns to education, the U.S. Census Bureau concluded that increases in formal education had a positive impact on annual earnings. For example, the U.S. Census Bureau reported that for full-time workers between the ages of 25 and 64 between 1997 and 1999 the average annual income for those who have not completed high school is $23,400, for high school graduates it is $30,400, and for those completing a bachelor’s degree it is $52,200. Average annual income rises higher yet for those who obtain advanced degrees. This general pattern, that increases in formal education correlate with increases in annual earnings, also holds true across an individual’s lifetime. The U.S.
Census Bureau reported that the average lifetime earnings, based upon 1997-1999 work experience, for those who have not completed high school is approximately $1 million, for high school graduates it is $1.2 million, and for those completing a bachelor’s degree it is $2.1 million. Again, the average lifetime earnings rise higher yet for those who obtain advanced degrees. Hence, college graduates can expect, on average, to earn nearly twice as much over a lifetime as those persons who have only a high school diploma and more than twice as much as those who have not completed high school. Similarly, a study published by the congressional Joint Economic Committee in January 2000 concluded that there is a strong consensus among economists that formal education has a positive impact not only on personal income but also on society. The study concluded that among the positive societal economic returns from increases in formal education are the creation of new knowledge (translating into the development of new processes and technologies) and the diffusion and transmission of knowledge (translating into the expansion of innovative techniques such as those found in the high-technology sector). Positive societal noneconomic improvements are also associated with increased amounts of formal education, which help Americans become better mothers, fathers, children, voters, and citizens. These positive noneconomic improvements are sometimes called positive neighborhood effects. Some of the positive neighborhood effects may be (1) more informed and interested voters, (2) decreases in crime, (3) decreased dependence upon certain types of public assistance, and (4) decreased incidence of illegitimate pregnancies. Although the census study and the study conducted by the Joint Economic Committee show positive economic and societal impacts of increased levels of education, the total net impacts of these benefits are difficult to quantify. 
Moreover, these studies do not comment on whether the loss of federal education assistance (as occurs for drug offenders through the provisions of the HEA) contributes to individuals’ not completing postsecondary education, or whether those individuals who are denied federal education assistance generate the necessary funding to attend institutions of higher education in other ways. Also at issue is whether the provisions of the HEA that deny postsecondary education benefits to drug offenders contribute positively to society by providing a deterrent to drug use. Research on the costs to society from drug use, and drug-associated criminal involvement, demonstrated that these costs to society are high. Therefore, if the denial of federal higher education benefits deters people from engaging in drug crimes, then the provisions might have positive economic and noneconomic impacts on society. Some of the positive effects of deterrence may include reductions in drug-related health care costs, reductions in drug-related crime and associated criminal justice costs, and increased national economic productivity. In addition, for many offenders and in particular for first-time drug offenders, the denial of postsecondary education benefits may delay entry into postsecondary education rather than prevent it. With the available data, we were unable to determine whether the provisions of the Higher Education Act that provide for denial of postsecondary education benefits would affect relatively larger or smaller numbers of minorities. The FAFSA does not request information about applicants’ race; therefore ED does not have data on the racial distribution of applicants or FAFSA ineligibles. Without data on the race of applicants for federal student aid, it is not possible to determine whether minorities are denied aid at higher rates than whites.
The Bureau of Justice Statistics’ Survey of Adults on Probation in 1995, which is the only national survey of probationers that includes data on the type of offense of conviction and educational attainment, indicates that there may be racial differences in the levels of educational attainment of drug offenders. The survey indicates that black and Hispanic drug offenders on probation complete high school at a lower rate than white drug offenders on probation. Specifically, while 68 percent of white drug offenders on probation had completed high school, 51 percent of black and 46 percent of Hispanic drug offenders on probation had completed high school. As completing high school (or gaining a general equivalency degree) is a prerequisite for enrollment in postsecondary education, these data suggest that lower proportions of black and Hispanic drug offenders (at least drug offenders on probation) would be eligible to enroll in postsecondary educational institutions and would therefore be eligible for federal higher education assistance. This appendix provides background on the legal and administrative frameworks for denying federally assisted housing benefits to persons who engage in drug-related criminal activities, our methods for estimating the numbers of persons denied benefits, and how we assessed the available data on racial minorities and the limited information on potential impacts. Federal law contains a variety of provisions relating to the denial of federally assisted housing benefits for certain types of drug-related criminal activity. These provisions relate to, among other things, (1) who may lose eligibility for federally assisted housing benefits because of drug-related criminal activity and (2) screening tools for the providers of federally assisted housing to use to determine ineligibility for such housing benefits.
Motivation for prohibiting drug offenders from public housing is reflected, in part, in congressional findings made in 1990 and amended in 1998, about drug-related criminal activities in public housing; these findings stated, in part, that (1) “drug dealers are increasingly imposing a reign of terror on public and other federally assisted low-income housing tenants,” (2) “the increase in drug-related crime not only leads to murders, muggings, and other forms of violence against tenants, but also to a deterioration of the physical environment,” and (3) “the Federal government has a duty to provide public and other federally assisted low-income housing that is decent, safe, and free from illegal drugs.” Public housing agencies (PHAs), which are typically local agencies created under state law that, under Department of Housing and Urban Development guidance, manage and develop public housing units for low-income families, are required, for example, to utilize leases that provide that any drug-related criminal activity on or off the premises by a public housing tenant shall be cause for termination of the tenancy. This provision also specifically applies to drug-related criminal activity by any member of the tenant’s household or any guest or other person under the tenant’s control. Similarly, federal law requires PHAs and owners of federally assisted housing to establish standards or lease provisions that allow for the termination of the tenancy or assistance for any household with a member who the PHA or owner determines is illegally using a controlled substance. Federal law further specifies that tenants evicted from federally assisted housing by reason of drug-related criminal activity are to be ineligible for federally assisted housing for a 3-year period, although evicted tenants that successfully complete an approved rehabilitation program may regain their eligibility before the 3-year period ends.
Under federal law and implementing regulations, PHAs have the discretion to evict tenants for drug-related criminal activity but are not required to evict such tenants. Rather, they are required to use leases that provide that any drug-related criminal activity on or off the premises by a public housing tenant shall be cause for termination of the tenancy. Implementing regulations by the U.S. Department of Housing and Urban Development relating to termination provide that a determination of such criminal activity may be made regardless of whether a person has been arrested or convicted of such activity and without satisfying a criminal conviction standard of proof of the activity. With respect to methamphetamine convictions, PHAs are required under federal law to establish standards to immediately and permanently terminate a tenancy as well as permanently prohibit occupancy in public housing for persons convicted of certain methamphetamine offenses occurring on public housing premises. PHAs do not have discretion in evicting these persons, and the standards also require that Housing Choice Voucher Program (formerly Section 8 low-income housing) participation be denied to such persons. Federal law also provides various screening tools to assist with determining possible ineligibility of tenants and applicants for federally assisted housing benefits because of drug-related criminal activity. These tools come primarily in the form of access to certain types of information. For example, under federal law, housing assistance agencies are authorized to request access to criminal conviction records from police departments and other law enforcement agencies for the purposes of applicant screening, lease enforcement, and eviction. PHAs have the authority under certain conditions to request access to such information with respect to tenants and applicants for the Housing Choice Voucher Program. 
Public housing authorities are also authorized under federal law to require that applicants provide written consent for the public housing authorities to obtain certain types of records, such as criminal conviction records and drug abuse treatment facility records. HUD is responsible for establishing the rules and providing guidance to PHAs in their administration of federally assisted housing benefits. PHAs can manage a single program or multiple HUD programs. HUD’s Office of Public and Indian Housing oversees the two key rental housing assistance programs that we reviewed, namely the Low-Rent Public Housing Assistance Program (also referred to as low-rent, or public housing) and the HCV Program. During the 1990s, PHAs gained broader latitude from HUD and Congress to establish their own policies in areas such as selecting tenants. This included increased latitude in taking actions to deny federally assisted housing benefits to persons receiving housing benefits and to applicants for benefits. HUD requires PHAs to submit for its review and approval annual plans that include, among other things, their policies for continuing occupancy and denying admission for drug-related criminal activities. Recent HUD guidance regarding denying federal housing benefits to persons engaged in drug-related criminal activities was issued in its “Final Rule,” dated May 2001. The rule amended existing regulations regarding implementing the federally assisted housing tenant eviction and applicant screening provisions for drug-related criminal activities. Termination and admission policies can vary substantially among PHAs nationwide. In a baseline study (November 2000) of a stratified random sample of the PHAs that were responsible for managing federally assisted housing units in the HCV Program, HUD reviewed the discretionary authority among PHAs. 
HUD reported that the variation among PHAs in conducting criminal background checks could legitimately result in an applicant being barred by one PHA even though the applicant could otherwise be admitted by another PHA. Some of the variations reported in the study include differences in (1) the sources used to obtain information about criminal history and drug-related criminal activities (e.g., newspaper stories, resident complaints, self-disclosure, official law enforcement records—federal, state, local); (2) the costs (paid by the PHA) associated with obtaining official law enforcement criminal background records; (3) the time span covered by the criminal history search; and (4) whether consideration is given to repeat offenses, only convictions, or arrests and convictions. We obtained and reviewed policies from seven of the largest PHAs having combined programs—Public Housing and HCV. Our review of their policies with respect to terminations and admissions for drug-related criminal activities showed variations in the policies established to deny housing benefits. For example, policies regarding terminations of leases (for public housing tenants) or termination of assistance (for HCV recipients) vary in how they implement the drug-related criminal activity provisions and in the scope of criminal background that can result in terminations: Drug-related criminal activity provisions can range from certain types of prohibited behaviors (e.g., those that threaten the health and safety of other residents) to certain drug convictions (e.g., drug-related criminal activity, and methamphetamine in particular). Scope of criminal background can vary by the period of prior criminal history that can trigger termination of leases or assistance, the type of prohibited drug-related criminal activities (e.g., personal use, felonious distribution, etc.), or whether there was a conviction in the case. 
Analogously, PHA policies on admissions into public housing or into HCV can vary based on a number of factors, and these variations in policies can result in differences among PHAs in the types of drug offenders that are denied federally assisted housing. Applicant screenings for drug-related criminal activity can occur in varying forms (such as an application, interview, or eligibility verification) and at varying times—such as before or after placement on the PHAs’ waiting lists. Sources of criminal history information used can vary, so some PHAs cast a wider net than others when searching for prohibited drug-related criminal activity. Sources can range from using only local law enforcement records to using Federal Bureau of Investigation/National Crime Information Center data. Periods of ineligibility for prior evictions from federally assisted housing can vary by time frame and criminal activity (e.g., drug-related or violent). Ineligibility periods ranged from 3 to 5 years. The evidence standard for drug addiction can vary to include a reasonable cause to believe illegal drug use exists or self-disclosure of illegal use on the application itself. We obtained data from HUD on the number of evictions from and applicants denied admission into public housing during fiscal years 2002 and 2003 for reasons of criminal activities. In each year, more than 98 percent of the PHAs that manage public housing responded to HUD’s request for data about security within the units managed by the PHA, including information on evictions and applicants denied admission. HUD’s information pertains to persons evicted or denied admission for reasons of criminal activity; these data do not distinguish between criminal activity and drug-related criminal activities. The HUD data also do not include measures of the number of tenants or of the total number of applicants screened. 
To adjust for differences in the size of the PHAs, we calculated a rate at which applicants were evicted or denied admission into public housing because of criminal activities that was based on the number of units maintained by all reporting PHAs. These data are reported in table 15. During each of the fiscal years, 2002 and 2003, there were more than 9,000 evictions (amounting to less than 1 percent of all units managed) because of criminal activities. There were about 49,000 applications for admission into public housing that were denied for reasons of criminal activities (amounting to about 4 percent of all units). As drug-related criminal activities are a subset of criminal activities, these data suggest that even if all of those evicted from public housing for reasons of criminal activity had engaged in drug-related criminal activities, terminations leading to evictions would amount to less than 1 percent of the public housing units managed by PHAs. To gauge the extent to which PHAs denied federally assisted housing by terminating leases (leading to possible evictions) for drug-related criminal activities, we contacted 40 of the largest PHAs in the country and asked them to provide data on the number of actions that they took to evict tenants by terminating leases and of these, the numbers that were terminated for criminal activity and for drug-related criminal activity. Of the 40 PHAs that we contacted, we received data from 17. We assessed the data that these PHAs provided for reliability. As shown in table 16, 15 of 17 PHAs that responded to our request provided data on the total number of public housing termination of leases. The rate at which PHAs terminated leases for reasons of drug-related criminal activities varied considerably, from 0 percent in Santa Clara County to 39.3 percent in Memphis. 
The Philadelphia PHA, which reported the largest number of lease terminations (2,324), reported terminating 50 of these leases (or 2.2 percent) for reasons of drug-related criminal activities. The Santa Clara County PHA terminated the smallest number of leases (1). Combined, the 13 PHAs that reported both the number of lease terminations and the number of terminations for drug-related criminal activities reported ending a total of 9,249 leases, and 520 of the terminations (or 5.6 percent of the total) were for drug-related criminal activities. Further, although the data on lease terminations for reasons of drug-related criminal activities are not generalizable to all PHAs that manage public housing program units, the information that they provided on leases terminated for reasons of drug-related criminal activities, and our calculation of these numbers as a percentage of terminations for criminal activities, show wide variation in the extent to which drug-related criminal activities predominate among all criminal activities that can result in a termination of a lease. In Cuyahoga County, for example, 82.4 percent of lease terminations for criminal activity were terminations for drug-related criminal activities, but in Oakland, 20 percent of the terminations for criminal activity occurred as a result of drug-related criminal activities. A majority of the PHAs that reported these data also reported that the number of lease terminations and the reasons for them (i.e., criminal or drug-related criminal activities) were similar to or smaller than the numbers in the prior 3 years. As shown in table 17, 16 of the 17 PHA respondents were able to provide some (although often incomplete) data on actions taken to terminate HCV assistance during 2003. Nine of the 16 PHA respondents were able to provide data on the number of actions to terminate HCV assistance for reasons of drug-related criminal activity or criminal activity.
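The percentages quoted above are simple ratios of the reported counts. As an illustration of the arithmetic (our own computation on the figures in the text, not additional data):

```python
# Share of lease terminations attributable to drug-related criminal activity,
# computed from counts quoted in the text. Illustrative arithmetic only.

def pct(part: int, whole: int) -> float:
    """Express part as a percentage of whole, rounded to one decimal place."""
    return round(100 * part / whole, 1)

# Philadelphia PHA: 50 of 2,324 lease terminations were drug-related (2.2 percent).
philadelphia_share = pct(50, 2_324)

# The 13 PHAs reporting both counts combined: 520 of 9,249 leases (5.6 percent).
combined_share = pct(520, 9_249)
```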
However, only 5 of the 9 respondents were able to provide data on the number of actions specifically taken to terminate HCV assistance for reasons of drug-related criminal activity. These 5 PHAs took 9,537 actions related to terminating HCV assistance, of which 54 (or about 0.6 percent) were for terminating assistance for reasons of drug-related criminal activities. Four of the 9 PHA respondents were able to provide data on the number of actions taken to terminate HCV assistance for reasons of criminal activity, and most, but not all, of them provided (at our request) broad estimates for drug-related criminal activity based on the total number of actions to terminate HCV assistance during 2003. These 4 PHAs took a total of 3,166 actions related to terminating HCV assistance, of which 133 actions (or about 4.2 percent) were for terminating assistance because of criminal activities. Three of the 4 PHAs estimated less than 25 percent of their total actions could have been for reasons of drug-related criminal activities, and 1 PHA did not provide an estimate. Applying the upper-bound broad estimates (25 percent) to each PHA's total actions would be overstating terminations for reasons of drug-related criminal activity because the resulting number is most likely to be equal to or be a subset of terminations for reasons of criminal activities. From a conservative perspective, it is conceivable that the 133 actions also represent terminations of assistance for reasons of drug-related criminal activity, thereby establishing a maximum rate of denial at 4.2 percent for reasons of drug-related criminal activity. Seven of the 16 PHA respondents provided only the total number of actions taken to terminate HCV assistance, along with a broad estimate of the percentages of terminations that could have been for reasons of drug-related criminal activity.
Six of the 7 PHAs reported less than 25 percent of their total actions could have been for reasons of drug-related criminal activities, and one reported 51 to 75 percent. Table 17 provides the data on terminations of assistance from the HCV program. The majority of PHAs that reported data on terminations from the HCV program also reported that the number and types of actions that they took during 2003 were similar to the numbers in the prior 3 years. As shown in table 18, 15 of 17 PHAs that responded to our request provided data on the number of actions taken on applications for public housing. However, only 6 of the 15 respondents provided data on the number of actions specifically taken to deny admission into public housing for reasons of drug-related criminal activity. Collectively, these six PHAs took action on 11,538 applications, of which 330 (or about 2.9 percent) were for denying admission for reasons of drug-related criminal activities. Nine of the 15 PHAs did not provide counts of the number of denials for reasons of drug-related criminal activity but provided data on the number of actions taken to deny admission for reasons of criminal activity. In completing our request, 4 PHAs provided broad estimates of denials for drug-related criminal activity based on the total number of actions taken on applications for public housing, and 5 PHAs did not provide estimates. Collectively, these 9 PHAs reported a total of 17,921 actions related to applications for admission into public housing, of which 1,081 actions (or 6 percent) were for denying admission for reasons of criminal activities. On the basis of our assumption that admission denials for reasons of drug-related criminal activity are most likely to be either a subset of or equivalent to the admission denials for reasons of criminal activities, we estimate that the maximum rate of denial for reasons of drug-related criminal activity for these PHAs is 6 percent.
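The percentages quoted in these tables follow directly from the reported counts. As a quick arithmetic check, the following sketch reproduces the headline rates using only figures given in the text (with simple rounding to one decimal place):

```python
def pct(part, whole):
    """Percentage of `whole` represented by `part`, rounded to one decimal."""
    return round(100 * part / whole, 1)

# Public housing lease terminations (13 PHAs reporting both counts).
assert pct(520, 9249) == 5.6     # drug-related share of all terminations

# HCV assistance terminations.
assert pct(54, 9537) == 0.6      # 5 PHAs with exact drug-related counts
assert pct(133, 3166) == 4.2     # upper bound: all criminal-activity cases

# Public housing admission denials.
assert pct(330, 11538) == 2.9    # 6 PHAs with exact drug-related counts
assert pct(1081, 17921) == 6.0   # upper bound from criminal-activity denials

print("reported rates reproduced")
```

Treating every criminal-activity case as if it were drug-related is what yields the "maximum rate" figures, since drug-related cases are a subset of criminal-activity cases.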
As with the other outcomes, the PHAs varied in the extent to which they reported that applicants were denied admission into public housing for reasons of drug-related criminal activities, and the majority of PHAs that provided data for 2003 reported that their activities related to actions and denials in 2003 were similar to the numbers in the prior 3 years. As shown in table 19, 14 of the 17 PHA respondents provided some (although mostly incomplete) data on actions taken on applications for the HCV program. Nine of the 14 PHAs provided data on the number of denials of admission into the program for reasons of drug-related criminal activities or criminal activity. Of the 2 PHAs that provided data on the number of denials for reasons of drug-related criminal activity, 1 PHA reported no denials, and the other PHA reported 10 denials out of 1,483 actions taken on applications, or 0.7 percent. Seven PHAs provided data on the number of denials for reasons of criminal activity. Among these 7 PHAs, there were a total of 20,513 reported actions taken on applicants. Of these, 303 were denied admission for reasons of criminal activities (or about 1.5 percent). On the basis of our assumption that admission denials for reasons of drug-related criminal activity are most likely to be either a subset of or equivalent to the admission denials for reasons of criminal activities, we estimate that the maximum for admission denials for reasons of drug-related criminal activity is 1.5 percent. We could not provide reliable estimates for the remaining 5 PHAs that reported incomplete data. Our review of limited data and interviews with those involved in federally assisted housing suggest a number of factors can contribute to the relatively low percentages of denials being reported for reasons of drug-related criminal activities.
As noted in a HUD baseline study, variation among PHAs in conducting HCV criminal background checks could legitimately result in an applicant being barred by one PHA who would otherwise be admitted by another PHA. In addition, a HUD official suggested that the percentages of denials that were reported to us by selected PHAs can be influenced by whether (1) the PHAs place drug users at the bottom of their waiting lists, (2) PHAs differ in the treatment of applicants if a household member rather than the applicant is the subject of the drug-related criminal activity, and (3) local courts presiding over eviction proceedings view the PHAs as the housing provider of last resort. In the last instance, the PHA’s decision to terminate a lease for reason of drug-related criminal activity may not be upheld. Moreover, comments made during interviews with selected officials on matters related to housing were consistent with our analysis of HUD data on PHA denials for criminal activity, and the relatively low number of denials for drug-related criminal activity provided to us by selected PHAs. Regarding the relatively small number of persons whose housing benefits were reported as terminated or persons denied program participation for reasons of criminal or drug-related criminal activities, a representative from the National Association of Housing and Redevelopment Officials stated that PHAs are not looking to turn away minor offenders (e.g., “the type of people that may have only stolen a candy bar”) but rather hardened criminals. 
On the variation in denials of federally assisted housing for drug-related criminal activities, the Project Coordinator for the Re-Entry Policy Council at the Council of State Governments suggested that the barriers to housing ex-drug offenders revolve around the discretion afforded PHAs, and that these barriers can best be dealt with at the local level by making states more aware of the issue, the applicability of the local rules, and the need for building collegial relationships with PHAs to develop options for housing ex-felons. More generally, assessing the impacts of the denial of federal housing benefits on the housing communities or on the individuals and families that have lost benefits was beyond the scope of our review, given the limited data that are available. Officials at HUD reported that they have not studied this issue, and our review of the literature did not return any comprehensive studies of impacts. In our opinion, any full assessment of the impacts of denial of housing benefits to drug offenders would have to consider a wide range of possible impacts, such as improvements in public safety that result from terminating leases of drug offenders; displacement of crime from one area to another with perhaps no overall (or area-wide) improvements in crime reduction; as well as the impacts on individuals and families, to name a few. Any such impact assessments would be complicated by the market conditions (limited quantity and high demand) for federally assisted housing and the variation in PHAs’ policies and practices that would also need to be considered. We requested the PHAs to provide us with data on the race of persons who were denied federally assisted housing for reasons of drug-related criminal activities. Of the 17 PHAs that responded to our request, few provided data by race on (1) the total number of actions taken and (2) those actions that were specifically for drug-related criminal activity. 
Only 4 PHAs provided data by race on public housing terminations, and 3 PHAs provided data by race on public housing admission denials. Four PHAs provided data by race on HCV terminations of assistance. Only 1 PHA provided data by race on HCV admission denials. From these limited data, we were unable to develop reliable estimates of racial differences in the frequency of terminations and denials of admission into federally assisted housing. In some cases, the number reported as terminated for drug-related criminal activities was too small to provide stable estimates, and because of the small numbers, the estimates of racial differences could exhibit large changes with the addition of a few more cases. For example, only 4 PHAs provided data by race on the number of leases terminated for reasons of drug-related criminal activities. In 1 PHA, slightly more than 3 percent of all lease terminations of blacks were for drug-related criminal activities, while almost 6 percent of all lease terminations of whites were for drug-related criminal activities. In this PHA, whites were about one and one half times more likely than blacks to have their leases terminated for reasons of drug-related criminal activities. In this PHA, 110 whites had leases terminated during 2003, and 6 of these terminations were for drug-related criminal activities. In a second PHA, blacks were three times as likely as whites to have their leases terminated for reasons of drug-related criminal activities, as 18 percent of blacks and 5 percent of whites had leases terminated for reasons of drug-related criminal activities. But in this second PHA, 19 whites had leases terminated, and 1 of these was for drug-related criminal activities. An addition of 2 whites to the number that had leases terminated for drug-related criminal activities would have almost eliminated the racial difference.
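The small-sample instability described here is easy to see numerically. A minimal illustration using the second PHA's figures from the text:

```python
# Second PHA: 19 white households had leases terminated in 2003,
# 1 of them for drug-related criminal activities.
white_rate = round(100 * 1 / 19, 1)    # 5.3 percent (vs. 18 percent reported for blacks)

# Adding just 2 more drug-related cases nearly erases the racial gap.
shifted_rate = round(100 * 3 / 19, 1)  # 15.8 percent, close to the 18 percent for blacks

print(white_rate, shifted_rate)
```

With denominators this small, a handful of additional cases moves the estimated rate by 10 percentage points, which is why the report declines to draw conclusions about racial differences from these data.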
The Denial of Federal Benefits Program established under section 5301 of the Anti-Drug Abuse Act of 1988, in general, provides federal and state court judges with a sentencing option to deny selected federal benefits to individuals convicted of federal or state offenses for the distribution or possession of controlled substances. The federal benefits that can be denied include grants, contracts, loans, and professional or commercial licenses provided by an agency of the United States. Certain benefits are excluded from deniability under this provision of the law; these include benefits such as social security, federally assisted housing, welfare, veterans' benefits, and benefits for which payments or services are required for eligibility. Federally assisted housing, TANF, and food stamp benefits may be denied to drug offenders under other provisions of federal law. (See app. II for more information on the denial of TANF and food stamp benefits, and see app. IV for more information on the denial of federally assisted housing benefits.) Federal and state court sentencing judges generally have discretion to deny any of the deniable benefits for any length of time up to the periods prescribed by the law, with the exception of the mandatory denial of benefits required for a third drug trafficking conviction. More specifically, depending upon the type of offense and conviction, and the number of prior convictions, the law provides for different periods of ineligibility for which benefits can or must be denied. As the number of convictions for a particular type of drug offense increases, so does the period of ineligibility for which benefits can or must be denied. Table 20 shows these periods. With respect to first-time drug possession convictions, a court may impose certain conditions, such as the completion of an approved drug treatment program, as a requirement for the reinstatement of benefits.
In addition, the sentencing court continues to have the discretion to impose other penalties and conditions apart from section 5301 of the Anti-Drug Abuse Act of 1988. Section 5301 of the Anti-Drug Abuse Act of 1988, as amended, also provides that under certain circumstances, the denial of benefits penalties may be waived or suspended with respect to certain offenders. For example, the denial of benefits penalties are not applicable to individuals who cooperate with or testify for the government in the prosecution of a federal or state offense or are in a government witness protection program. In addition, with respect to individuals convicted of drug possession offenses, the denial of benefits penalties are to be “waived in the case of a person who, if there is a reasonable body of evidence to substantiate such declaration, declares himself to be an addict and submits himself to a long-term treatment program for addiction, or is deemed to be rehabilitated pursuant to rules established by the Secretary of Health and Human Services.” Also, the period of ineligibility for the denial of benefits is to be suspended for individuals who have completed a supervised drug rehabilitation program, have otherwise been rehabilitated, or have made a good faith effort to gain admission to a supervised drug rehabilitation program but have been unable to do so because of inaccessibility or unavailability of such a program or the inability of such individuals to pay for such a program. State and federal sentencing judges generally have discretion to impose denial of federal benefits, under section 5301 of the Anti-Drug Abuse Act of 1988, as a sanction. This sanction can be imposed in combination with other sanctions, and courts have the option of denying all or some of the specified federal benefits and determining the length of the denial period within certain statutorily set ranges. 
When denial of benefits under section 5301 of the Anti-Drug Abuse Act of 1988 is part of a sentence, the sentencing court is to notify the Bureau of Justice Assistance, which maintains a database (the Denial of Federal Benefits Program Clearinghouse) of the names of persons who have been convicted and the benefits that they have been denied. BJA passes this information on to the U.S. General Services Administration (GSA), which maintains the debarment list for all agencies. GSA publishes the names of individuals who are denied benefits in the Lists of Parties Excluded from Federal Procurement or Nonprocurement Programs, commonly known as the Debarment List. The Debarment List contains special codes that indicate whether all or selected benefits have been denied for an individual and the expiration date for the period of denial. Before making an award or conferring a pertinent federal benefit, federal agencies are required to consult the Debarment List to determine if the individual is eligible for benefits. The Department of Justice also has data-sharing agreements with the Department of Education and the Federal Communications Commission. The purpose of these agreements is to provide these agencies with access to information about persons currently denied the federal benefits administered by them. For example, as described in this report, students who are convicted of offenses involving the sale or possession of a controlled substance are ineligible to receive certain federal postsecondary education benefits. In order to ensure that student financial assistance is not awarded to individuals subject to denial of benefits under court orders issued pursuant to section 5301, DOJ and the Department of Education implemented a computer matching program. The Department of Education attempts to identify persons who have applied for federal higher education assistance by matching records from applicants against the BJA database list of persons who have been denied benefits. 
Officials at the Department of Education report that the department has matched only a few records of applicants for federal higher education assistance to the DOJ list of persons denied federal benefits. The individuals whose names appear on the DOJ list may differ from those individuals who self-certify to a drug offense conviction on their applications for federal postsecondary education assistance. (See app. III for more information on this.) The Administrative Office of United States Courts is responsible for administrative matters for the federal courts. Shortly after the passage of the Anti-Drug Abuse Act of 1988, AOUSC added the Denial of Federal Benefits sentence enhancement to the Pre-Sentence Report Monograph, which provided information to probation officers about the availability of the DFB as a sanction along with its requirements. AOUSC also developed a standard form for federal judges to use in reporting the imposition of the Denial of Federal Benefits sanctions; the form is part of the Judgment and Commitment Order that is completed by the court upon sentencing. The United States Sentencing Commission promulgates federal sentencing guidelines and collects data on all persons sentenced pursuant to the federal sentencing guidelines. After the passage of the Anti-Drug Abuse Act of 1988, the USSC prepared a guideline for this sanction and included it in the Sentencing Guidelines Manual. Annually, USSC distributes the Sentencing Guidelines Manual to federal court officials. Bureau of Justice Assistance data show that between 1990 and the second quarter of 2004, 8,298 offenders were reported as having been denied federal benefits by judges who imposed sanctions under the Denial of Federal Benefits Program. About 38 percent (or 3,128) of these offenders were denied benefits in state courts, and about 62 percent (or 5,170) were denied benefits in federal courts.
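The state/federal split in the BJA figures can be verified directly; a minimal check using the counts as given in the text:

```python
state_denials = 3128     # offenders denied benefits in state courts, 1990 through Q2 2004
federal_denials = 5170   # offenders denied benefits in federal courts

total = state_denials + federal_denials
assert total == 8298                         # matches the reported total

print(round(100 * state_denials / total))    # about 38 (percent)
print(round(100 * federal_denials / total))  # about 62 (percent)
```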
An average of about 635 persons per year were denied benefits under the program over the 1992 through 2003 period, and the number denied in any given year ranged from about 428 to 833. The number denied a benefit under the program decreased to 428 in 2002 and increased to 595 in 2003. According to BJA data, judges in comparatively few courts used the denial of federal benefits provisions. State court judges in 7 states and federal judges in judicial districts in 26 states were reported to have imposed the sanction. Among state courts, judges in Texas accounted for 39 percent of the state court totals, while judges in Oregon and Rhode Island accounted for 30 percent and 13 percent, respectively. Among the federal courts, judges in judicial districts in Texas accounted for 21 percent of the federal totals, while judges in North Carolina, Mississippi, Georgia, Florida, Nevada, and Kansas accounted for between 8 percent and 15 percent of the totals. Federal judges in each of the remaining 19 states accounted for less than 3 percent of the federal totals. Not all of the 8,298 offenders recorded as having been denied federal benefits between 1990 and 2004 under the program are currently denied benefits. For about 75 percent of these offenders, the period of denial has expired. Officials at BJA report that as of April 2004, they maintained about 2,000 active records of persons currently under a period of denial. Relative to the total number of felony drug convictions, the provisions of the Denial of Federal Benefits (DFB) Program are reportedly used in a relatively small percentage of drug cases. For example, biennially between 1992 and 2000, there were a minimum of 274,000 and a maximum of 348,000 convictions for drug offenses in state courts, or about 307,000 per year. In federal courts over this same period, there were between 15,000 and 24,000 drug offenders convicted, or about 19,000 per year.
As the average annual number of drug defendants in state courts denied benefits under the DFB was 223, the rate of use of the DFB in state courts averaged about 0.07 percent. Among federal drug defendants, the annual average number reported as having received a sanction under the program was about 369, while the average annual number of drug defendants sentenced federally was about 19,000; hence, the percentage of all federal drug defendants receiving a sanction under the program was about 2 percent. Throughout the history of the program, questions have been raised about its apparently limited impacts. In 1992, we reported on the difficulties in denying federal benefits to convicted drug offenders and suggested that there would not be widespread withholding of federal benefits from drug offenders. Officials at BJA also reported that the sanction has not been widely used by judges. In 2001, BJA program managers met with some U.S. attorneys in an attempt to provide them with information about the potential benefits of the program. According to BJA officials, the U.S. attorneys responded that they typically used other statutes for sanctioning and sentencing drug offenders, rather than the sanctions under the Denial of Federal Benefits Program. The benefits that can be denied under the program—federal contracts, grants, loans, and professional or commercial licenses—suggest some reasons as to its relatively infrequent use. Persons engaged in federal contracting, for example, are generally engaged in business activities, and such persons compose small percentages of federal defendants sentenced for drug offenses. Hence, relatively few defendants may qualify to use these federal benefits, and therefore relatively few may be denied the benefits. None of the data sources that we reviewed provided reliable data on the race and ethnicity of persons denied federal benefits under the Denial of Federal Benefits Program. In addition to the contact named above, William J. 
Sabol, Clarence Tull, Brian Sklar, DuEwa Kamara, Geoffrey Hamilton, David Makoto Hudson, Michele Fejfar, David Alexander, Amy Bernstein, Anne Laffoon, Julian King, and Andrea P. Smith made key contributions to this report.

Several provisions of federal law allow for or require certain federal benefits to be denied to individuals convicted of drug offenses in federal or state courts. These benefits include Temporary Assistance for Needy Families (TANF), food stamps, federally assisted housing, postsecondary education assistance, and some federal contracts and licenses. Given the sizable population of drug offenders in the United States, the number and the impacts of federal denial of benefit provisions may be particularly important if the operations of these provisions work at cross purposes with recent federal initiatives intended to ease prisoner reentry and foster prisoner reintegration into society. GAO analyzed (1) for selected years, the number and percentage of drug offenders that were estimated to be denied federal postsecondary education and federally assisted housing benefits and federal grants, contracts, and licenses and (2) the factors affecting whether drug offenders would have been eligible to receive TANF and food stamp benefits, but for their drug offense convictions, and for a recent year, the percentage of drug offenders released who would have been eligible to receive these benefits. Several agencies reviewed a draft of this report, and we incorporated the technical comments that some provided into the report where appropriate. For the years for which it obtained data, GAO estimates that relatively small percentages of applicants but thousands of persons were denied postsecondary education benefits, federally assisted housing, or selected licenses and contracts as a result of federal laws that provide for denying benefits to drug offenders.
During academic year 2003-2004, about 41,000 applicants (or 0.3 percent of all applicants) were disqualified from receiving postsecondary education loans and grants because of drug convictions. For 2003, 13 of the largest public housing agencies in the nation reported that less than 6 percent of 9,249 lease terminations that occurred in these agencies were for reasons of drug-related criminal activities--such as illegal distribution or use of a controlled substance--and 15 large public housing agencies reported that about 5 percent of 29,459 applications for admission were denied for these reasons. From 1990 through the second quarter of 2004, judges in federal and state courts were reported to have imposed sanctions to deny benefits such as federal licenses, grants, and contracts to about 600 convicted drug offenders per year. Various factors affect which convicted drug felons are eligible to receive TANF or food stamps. This is because state of residence, income, and family situation all play a role in determining eligibility. Federal law mandates that convicted drug felons face a lifetime ban on receipt of TANF and food stamps unless states pass laws to exempt some or all convicted drug felons in their state from the ban. At the time of GAO's review, 32 states had laws exempting some or all convicted drug felons from the ban on TANF, and 35 states had laws modifying the federal ban on food stamps. Because of the eligibility requirements associated with receiving these benefits, only those convicted drug felons who, but for their conviction, would have been eligible to receive the benefits could be affected by the federal bans. For example, TANF eligibility criteria include requirements that an applicant have custodial care of a child and that income be below state-determined eligibility thresholds.
Available data for 14 of 18 states that fully implemented the ban on TANF indicate that about 15 percent of drug offenders released from prison in 2001 met key eligibility requirements and constitute the pool of potentially affected drug felons. Proportionally more female drug felons than males may be affected by the ban, as about 27 percent of female and 15 percent of male drug offenders released from prison in 2001 could be affected. |
DOD defines a UAV as a powered aerial vehicle that does not carry a human operator; can be land-, air-, or ship-launched; uses aerodynamic forces to provide lift; can be autonomously or remotely piloted; can be expendable or recoverable; and can carry a lethal or nonlethal payload. Generally, UAVs consist of the aerial vehicle; a flight control station; information and retrieval or processing stations; and, sometimes, wheeled land vehicles that carry launch and recovery platforms. UAVs have been used in a variety of forms and for a variety of missions for many years. After the Soviet Union shot down a U-2 spy plane in 1960, certain UAVs were developed to monitor Soviet and Chinese nuclear testing. Israel used UAVs to locate Syrian radars and was able to destroy the Syrian air defense system in Lebanon in 1982. The United States has used UAVs in the Persian Gulf War, Bosnia, Operation Enduring Freedom, and Operation Iraqi Freedom for intelligence, surveillance, and reconnaissance missions and to attack a vehicle carrying suspected terrorists in Yemen in 2002. The United States is also considering using UAVs to assist with border security for homeland security or homeland defense. The current generation of UAVs has been under development for defense applications since the 1980s. UAVs won considerable acceptance during military operations in Afghanistan and Iraq in 2002 and 2003, respectively. They were used in these operations to observe, track, target, and in some cases strike enemy forces. These and similar successes have heightened interest in UAVs within DOD and the services. In fact, by 2010, DOD plans to have at least 14 different UAVs in the force structure to perform a variety of missions. Moreover, in the fiscal year 2001 National Defense Authorization Act, Congress established the goal that one-third of the Air Force’s deep-strike capability be provided by UAVs by 2010. The overall management of UAV programs has gone full circle. 
In 1989 the DOD Director of Defense Research and Engineering set up the UAV Joint Project Office as a single DOD organization with management responsibility for UAV programs. With the Navy as the Executive Agency, within 4 years the Joint Project Office came under criticism for a lack of progress. Replacing the office in 1993, DOD created the Defense Airborne Reconnaissance Office as the primary management oversight and coordination office for all departmentwide manned and unmanned reconnaissance. In 1998, however, this office also came under criticism for its management approach and slow progress in fielding UAVs. In that same year, this office was dissolved and UAV program development and acquisition management was given to the services, while the Assistant Secretary of Defense for Command, Control, Communications and Intelligence was assigned to provide oversight for the Secretary of Defense. Our report being issued today (Force Structure: Improved Strategic Planning Can Enhance DOD's Unmanned Aerial Vehicles Efforts, GAO-04-342, Mar. 17, 2004) analyzes recent funding trends for UAVs and makes recommendations to strengthen DOD's strategic planning and management approach for UAVs. During the past 5 fiscal years, Congress provided funding for UAV development and procurement that exceeded the amounts requested by DOD, and to date the services have obligated about 99 percent of these funds. To promote the rapid employment of UAVs, Congress appropriated nearly $2.7 billion to develop and acquire UAVs from fiscal year 1999 through fiscal year 2003, compared with the $2.3 billion requested by DOD. The majority of the funds—$1.8 billion (67 percent)—have been for UAV research, development, test, and evaluation. Figure 1 displays the trends in research, development, test, and evaluation and procurement funding from fiscal year 1999 through fiscal year 2003.
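The funding shares above reduce to simple arithmetic; a back-of-envelope sketch (dollar amounts in billions, as given in the text):

```python
appropriated = 2.7   # total UAV appropriations, FY1999-FY2003
requested = 2.3      # amount DOD requested over the same period
rdte = 1.8           # share spent on research, development, test, and evaluation

rdte_share = round(100 * rdte / appropriated)       # 67 (percent)
above_request = round(appropriated - requested, 1)  # 0.4 (billion above the request)

print(rdte_share, above_request)
```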
Over these 5 years, only three systems—the Air Force’s Predator and Global Hawk, and the Army’s Shadow—have matured to the point that they required procurement funding, amounting to about $880 million by fiscal year 2003 and another estimated $938 million needed by fiscal year 2005. Because Congress has appropriated more funds than requested, the services are able to acquire systems at a greater rate than planned. For example, in fiscal year 2003, the Air Force requested $23 million to buy 7 Predator UAVs, but Congress provided over $131 million, enough to buy 29 Predators. The Air Force had obligated 71 percent of the Predator’s fiscal year 2003 funding during its first program year. The Hunter, Predator, Pioneer, and Shadow are among the UAV systems currently being used, and therefore we determined the level of DOD’s operations and maintenance spending from fiscal year 1999 through fiscal year 2003 for these systems. Operations and maintenance funding has steadily increased over that period from about $56.6 million for three of the systems to $155.2 million in 2003 for all four. These increases are the result of a larger inventory of existing systems and the introduction of new systems. Figure 2 displays the operations and maintenance spending for these UAV systems for fiscal years 1999 to 2003. DOD has taken certain positive steps to improve the management of the UAV program by establishing a program focal point in the joint UAV Planning Task Force and trying to communicate a common vision for UAV development, the UAV Roadmap. While the creation of the Task Force and the UAV Roadmap are important steps to improve the management of the program, they are not enough to reasonably assure that DOD is developing and fielding UAVs efficiently. The Task Force’s authority is generally limited to program review and advice, but is insufficient to enforce program direction. 
Moreover, the UAV Roadmap does not constitute a comprehensive strategic plan for developing and integrating UAVs into force structure. Since 2000, DOD has taken several positive steps to improve the management of the UAV program. In October 2001, the Under Secretary of Defense for Acquisition, Technology, and Logistics created the joint UAV Planning Task Force as the joint advocate for developing and fielding UAVs. The Task Force is the focal point to coordinate UAV efforts throughout DOD, helping to create a common vision for future UAV-related activities and to establish interoperability standards. For example, the Task Force is charged with developing and coordinating detailed UAV development plans, recommending priorities for development and procurement efforts, and providing the services and defense agencies with implementing guidance for common UAV programs. The development of the 2002 Roadmap has been the Task Force’s primary product to communicate its vision and promote interoperability. The Roadmap is designed to guide U.S. military planning for UAV development through 2027, and describes current programs, identifies potential missions, and provides guidance on developing emerging technologies. The Roadmap is also intended to assist DOD decision makers to build a long-range strategy for UAV development and acquisition in such future planning efforts as the Quadrennial Defense Review. The joint UAV Planning Task Force’s authority is generally limited to program review and advice, but is insufficient to enforce program direction. The Task Force Director testified before the House Armed Services Committee in March 2003 that the Task Force does not have program directive authority, but provides the Under Secretary of Defense for Acquisition, Technology, and Logistics with advice and recommended actions. 
Without such authority, according to the Director, the Task Force seeks to influence services’ programs by making recommendations to them or proposing recommended program changes for consideration by the Under Secretary. According to defense officials, the Task Force has attempted to influence the joint direction of service UAV efforts in a variety of ways, such as reviewing services’ budget proposals, conducting periodic program reviews, and participating in various UAV-related task teams, and has had some successes, as shown below:

The Task Force has encouraged the Navy to initially consider an existing UAV (Global Hawk) rather than develop a unique UAV for its Broad Area Marine Surveillance mission.

The Task Force has worked with the Army’s tactical UAV program to encourage it to consider using the Navy’s Fire Scout as an initial platform for the Future Combat System class IV UAV.

The Task Force convinced the Air Force to continue with the Unmanned Combat Aerial Vehicle program last year when the Air Force wanted to terminate it, and the Task Force ultimately helped the then-separate Air Force and Navy programs merge into a joint program.

The Task Force convinced the Navy not to terminate the Fire Scout rotary wing UAV program as planned.

However, the Task Force cannot compel the services to adopt any of its suggestions and consequently has not always succeeded in influencing service actions. For example, according to DOD officials, no significant progress has been made in achieving better interoperability among the services in UAV platform and sensor coordination, although efforts are continuing in this vein. Neither the Roadmap nor other DOD guidance documents represent a comprehensive strategy to guide the development and fielding of UAVs that complement each other, perform the range of missions needed, and avoid duplication. 
DOD officials acknowledged that the Office of the Secretary of Defense has not issued any guidance that establishes an overall strategy for UAVs in DOD. While high-level DOD strategic-planning documents—such as the National Military Strategy, the Joint Vision 2020, and the Defense Planning Guidance—provide some general encouragement to pursue transformational technologies, including the development of UAVs, these documents do not provide any specific guidance on developing and integrating UAVs into the force structure. At the same time, while the Joint Requirements Oversight Council has reviewed several UAVs and issued guidance for some systems, neither the Joint Staff nor the council has issued any guidance that would establish a strategic plan or overarching architecture for DOD’s current and future UAVs. In June 2003, the Chairman of the Joint Chiefs of Staff created the Joint Capabilities Integration and Development System to provide a top-down capability-based process. Under the system, five boards have been chartered, each representing a major warfighting capability area as follows: (1) command and control, (2) force application, (3) battle space awareness, (4) force protection, and (5) focused logistics. Each board has representatives from the services, the combatant commanders, and certain major functions of the Under Secretary of Defense. Each board is tasked with developing a list of capabilities needed to conduct joint operations in its respective functional areas. The transformation of these capabilities is expected, and the boards are likely to identify specific capabilities that can be met by UAVs. Nonetheless, according to Joint Staff officials, these initiatives will not result in an overarching architecture for UAVs. However, the identification of capabilities that can be met by UAVs is expected to help enhance the understanding of DOD’s overall requirement for UAV capabilities. 
Moreover, according to officials in the Office of the Secretary of Defense, the UAV Roadmap was not intended to provide an overarching architecture for UAVs. The Roadmap does state that it is intended to assist DOD decision makers in building a long-range strategy for UAV development and acquisition in such future planning efforts as the Quadrennial Defense Review. Nonetheless, the Roadmap represents a start on a strategic plan because it incorporates some of the key components of strategic planning, as shown below:

Long-term goals—The Roadmap states its overall purpose and what it hopes to encourage the services to attain. The Roadmap refers to the Defense Planning Guidance’s intent for UAVs as a capability and indicates that the guidance encourages the rapid advancement of this capability. At the same time, it does not clearly state DOD’s overall or long-term goals for its UAV efforts. Similarly, while it states that it wants to provide the services with clear direction, it does not clearly identify DOD’s vision for its UAV force structure through 2027.

Approaches to obtain long-term goals—The Roadmap’s “Approach” section provides a strategy for developing the Roadmap and meeting its goal. This approach primarily deals with identifying requirements and linking them to needed UAV payload capabilities, such as sensors and associated communication links. The approach then ties these requirements to forecasted trends in developing technologies as a means to try to develop a realistic assessment of the state of the technology in the future and the extent to which this technology will be sufficient to meet identified requirements. At the same time, however, the Roadmap does not provide a clear description of a strategy for defining how to develop and integrate UAVs into the future force structure. For example, the Roadmap does not attempt to establish UAV development or fielding priorities, nor does it identify the most urgent mission-capability requirements. 
Moreover, without sufficient identification of priorities, the Roadmap cannot link these priorities to current or developing UAV programs and technology.

Performance goals—The Roadmap established 49 specific performance goals for a variety of tasks. Some of these goals are aimed at fielding transformational capabilities without specifying the missions to be supported. Others are to establish joint standards and control costs. Nonetheless, of the 49 goals, only 1 deals directly with developing and fielding a specific category of UAV platform to meet a priority mission-capability requirement—the suppression of enemy air defenses or strike electronic attack. The remaining goals, such as developing heavy-fuel aviation engines suitable for UAVs, are predominantly associated with developing UAV or related technologies as well as UAV-related standards and policies to promote more efficient and effective joint UAV operations. However, the Roadmap does not establish overall UAV program goals.

Performance indicators—Some of the 49 goals have performance indicators that could be used to evaluate progress, while others do not. Furthermore, the Roadmap does not establish indicators that readily assess how well the program will meet the priority mission capabilities.

As the services and defense agencies pursue separate UAV programs, they risk developing systems with duplicate capabilities, potentially higher operating costs, and increased interoperability challenges. The House Appropriations Committee was concerned that without comprehensive planning and review, there is no clear path toward developing a UAV force structure. Thus, the committee directed that each service update or create a UAV roadmap. These roadmaps were to address the services’ plans for the development of future UAVs and how current UAVs are being employed. 
Officials from each of the services indicated that their UAV roadmap was developed to primarily address their individual service’s requirements and operational concepts. However, in their views, such guidance as the Joint Vision 2020, National Military Strategy, and Defense Planning Guidance did not constitute strategic plans for UAVs to guide the development of their individual service’s UAV roadmap. These officials further stated that the Office of the Secretary of Defense’s 2002 UAV Roadmap provided some useful guidance, but was not used to guide the development of the service’s UAV roadmaps. Moreover, they did not view the Office of the Secretary of Defense’s Roadmap as either a DOD-wide strategic plan or an overarching architecture for integrating UAVs into the force structure. According to service officials developing the service-level UAV roadmaps, there was little collaboration with other services’ UAV efforts. As we have described for you today, DOD has an opportunity to enhance its strategic planning to improve the management of UAV development and fielding. In the report released to you today, we make two recommendations to assist DOD to enhance its management control over the UAV program. We recommend that DOD establish a strategic plan or set of plans based on mission requirements to guide UAV development and fielding. We also recommend that DOD designate the joint UAV Planning Task Force or another appropriate organization to oversee the implementation of a UAV strategic plan. In responding to our report, DOD stated that it partially concurred with the first recommendation but preferred to address UAV planning through the Joint Capabilities Integration and Development System process. 
DOD disagreed with the second recommendation, saying that it did not need to provide an organization within the department with more authority because it believes that the Under Secretary of Defense for Acquisition, Technology, and Logistics already has sufficient authority to achieve DOD’s UAV goals. Our report states clearly that we continue to support both recommendations. We believe that the growth in the number and cost of UAV programs, and their importance to military capabilities, will need more formalized oversight by DOD. Our reviews of system development efforts over the last several decades show that the road to fielding operational UAVs has not been easy. Success has been achieved as a result of intervention by leadership and the use of innovative processes. Even when put on a sound footing, these programs have continued to face new challenges. In the future, UAVs will be growing in number, sophistication, and significance, but will also have to compete for increasingly scarce funds, electromagnetic frequency spectrum, and airspace. Since the mid-1970s, we have reviewed many individual DOD UAV development efforts. A list of our reports is attached in the section entitled “Related GAO Products.” Our previous work has highlighted problems that prompted congressional efforts to bring the development process under control and that subsequently led to the termination or redesign and retrofit of a number of these development efforts. In 1988 we reported on a variety of management challenges related to UAV development. At that time, congressional committees had expressed concern about duplication in the services’ UAV programs, which ran counter to the committees’ wishes that DOD acquire UAVs to meet common service needs. 
In 1988, we noted that DOD was to provide, at minimum, a UAV master plan that (1) harmonized service requirements, (2) utilized commonality to the maximum extent possible, and (3) made trade-offs between manned and unmanned vehicles in order to provide future cost savings. After budget deliberations for fiscal year 1988, Congress eliminated separate service accounts for individual UAV programs and consolidated that funding into a single Defense Agencies account. This in turn led to the formation of DOD’s UAV Joint Projects Office, which promoted joint UAV efforts that would prevent unnecessary duplication. This effort was led by the Defense Airborne Reconnaissance Office within the Office of the Secretary of Defense, which has since been disbanded. Our analysis of DOD’s 1988 UAV master plan identified a number of weaknesses: (1) it did not eliminate duplication, (2) it continued to permit the proliferation of single-service programs, (3) it did not adequately consider cost savings potential from manned and unmanned aircraft trade-offs, and (4) it did not adequately emphasize the importance of common payloads among different UAV platforms. In testimony presented in April 1997, we recognized the strong support that Congress had provided for DOD’s UAV acquisition efforts and how it had encouraged the department to spur related cooperation between the services. We noted that problems with UAV development continued and were leading to cost, schedule, and performance deficiencies; continued duplication of UAV capabilities; and even program cancellations in many instances. In 1997, only one UAV—the Pioneer—had been fielded. Since 1997, we have continued to evaluate the department’s UAV development efforts, including plans to develop a lethal variant of UAVs called unmanned combat air vehicles. 
Our reviews over the last 27 years have revealed several reasons why UAV efforts have not been successful, including requirements that outstrip technology, overly ambitious schedules, and difficulties in integrating UAV components and in UAV testing. We have also found that UAV system acquisition processes were not protected from what is known as “requirements creep.” These requirements changes increase development and procurement costs significantly. For example:

The Aquila was started in 1979 with a straightforward mission: to provide small, propeller-driven UAVs to give ground commanders real-time battlefield information about enemy forces beyond ground observers’ line of sight. Requirements creep increased complexity and development and anticipated procurement costs significantly. For example, in 1982 a requirement for night vision capability was added, which increased development costs due to the additional payloads and air vehicles needed to meet the new requirement. During operational tests, the Aquila successfully fulfilled all requirements in only 7 of 105 flights.

When the Air Force’s Global Hawk reconnaissance UAV was started in 1994, it was expected to have an average unit flyaway price of $10 million. Changes in the aircraft’s range and endurance objectives required the contractor to modify the wings and other structural parts, and by 1999 its cost had increased by almost 50 percent. In our April 2000 report, we concluded that the cost of air vehicles to be produced could increase still further, because the Air Force had not finalized its design requirements. In 2002, the Global Hawk program adopted a higher-risk strategy that calls for both a larger, more advanced aircraft and an accelerated delivery schedule. 
In June 2003 we reported that the original requirements for the Air Force’s unmanned combat air vehicle (UCAV) program posed significant but manageable challenges to build an air vehicle that is affordable throughout its life cycle, highly survivable, and lethal. Subsequently, however, the Air Force added requirements—adding a mission and increasing flying range. This action widened the gap between requirements and resources and increased the challenge for the development program. Aside from the air vehicle, other ground and airborne systems are also needed for the UAV to be complete. DOD’s practice of buying systems before successful completion of testing has repeatedly led to defective systems that were terminated, redesigned, or retrofitted to achieve satisfactory performance. Our reviews have shown that, before production begins, DOD needs to test to ensure that all key parts of the UAV system can work successfully together, and that it can be operated and maintained affordably throughout its life cycle. In March 1999, we examined the Medium Range UAV, which began in 1989 as a joint effort of the Navy and Air Force. The Air Force was to design and build the sensor payload, including cameras, a videotape recorder, and a communications data link that would send back the imagery from the UAV. The Navy was to design and build the air vehicle. Splitting and then integrating these development efforts became problematic. The Air Force ran into major payload development difficulties, which increased payload development costs. As a result of the difficulties, the payload program fell behind schedule, developmental tests on a surrogate manned aircraft were unsuccessful, and the payload was too big to fit in the space the Navy had allotted inside the aircraft. In 1993, the program was terminated. In 1999, the Army began low-rate initial production of four Shadow systems at the same time that it began the engineering and manufacturing development phase. 
In February 2001, the Army sought to revise its acquisition strategy to procure four additional Shadow systems before conducting operational tests. We recommended in a 2000 report that the Army not buy these four additional systems until after operational testing was completed. In our opinion, only operational testing of the system in a realistic environment could show whether the overall system would meet the Army’s operational needs. Subsequently, we reported that problems encountered during early tests forced the program to delay completion of operational testing by one year. The results of operational tests revealed that the Shadow was not operationally suitable or survivable and might not be affordable. Our body of UAV work also made several observations about factors that contribute to success, including the use of innovative approaches and high-level interventions by individuals and organizations. In August 1999, we concluded that DOD’s use of Advanced Concept Technology Demonstration projects improved UAV acquisitions because it focused on maturing technology and proving military utility before committing to a UAV. We found that DOD’s Advanced Concept Technology Demonstration approach was consistent with the practices that we typically characterize as leading commercial development efforts. The Predator UAV used a 30-month Advanced Concept Technology Demonstration approach, and prototypes were deployed in Bosnia in 1995 and 1996 as part of the demonstration. Performance data gathered there convinced military users that Predator was worth acquiring. High-level individuals intervened to set resource constraints and encouraged evolutionary acquisition strategies on the Air Force’s Global Hawk, the Army’s Shadow UAV, and the Joint Unmanned Combat Air System programs. 
In the initial Shadow program, the Army’s top military acquisition executive reached an agreement with his counterpart in the requirements community that limited the program to “must have” capabilities and restrained resources such as cost. This resulted in the need to make trade-offs—so the Army lowered the performance requirement for the imagery sensor so that existing technology could be used. In the Global Hawk program, the Under Secretary of Defense (Acquisition, Technology, and Logistics) became personally involved and insisted that the program take an evolutionary approach, developing and fielding different versions of increasingly capable UAVs. He also placed cost constraints on the initial version, which enabled more advanced imagery sensor capabilities to be deferred for later versions of the UAV. In our report on the Unmanned Combat Air Vehicle program, we reported on Air Force plans to have initial deliveries of a lethal-strike-capable aircraft by 2011. The Air Force had abandoned the Unmanned Combat Air Vehicle program’s initial low-risk approach to development, and increased requirements and accelerated its program schedule shortly before it was to shift to the product development stage. As previously reported, it took intervention by the Office of the Secretary of Defense to resolve requirements and funding challenges and maintain strong oversight over the program. The Task Force also was instrumental in getting the funding restored to the program, creating a joint effort between the Air Force and Navy, and accelerating the Navy’s version. Their strong oversight and intervention might have saved the program, which is now known as the Joint Unmanned Combat Air System program. Over the next decade, DOD plans show that UAV investments will increase, greater numbers will be fielded, and these systems will play more significant roles than in the past. 
In addition to overcoming the problems and pressures that have impaired past programs, managers of future UAV programs will face increasing competition for money, electromagnetic frequency spectrum bandwidth, and airspace. By 2010, DOD plans to invest $11 billion in UAV acquisitions, quadrupling the number of systems in its inventory today. As UAV programs vie for increased funding, they will have to compete against very large programs, such as the F/A-22 and the Joint Strike Fighter. If the costs of acquisition programs continue to exceed what has been set aside in the budget, competition will intensify and funding could be jeopardized. Initially, UAVs were seen as complementary systems that augmented capabilities the warfighter already had. They were, in a sense, “another pair of eyes.” We are already seeing the evolution of UAVs into more significant roles, for which they provide primary capability. For example, the Global Hawk is being seen as replacing the U-2 reconnaissance aircraft, and the Unmanned Combat Air Vehicle may eventually perform electronic warfare missions that the EA-6 Prowler aircraft performs today. UAVs are figuring prominently in plans to transform the military into a more strategically responsive force. UAVs are expected to be an integral part of this information-based force. For example, UAVs may serve as relay nodes in the Future Combat System’s command and control network. As UAVs perform increasingly significant roles, their payloads and designs will likely become more sophisticated. UAVs depend on the available space in the electromagnetic frequency spectrum to send and receive signals. Such signals are essential to UAV control, communications, and imagery. As the number of UAVs grows, the systems will have to compete for more room on the spectrum. Spectrum resources are scarce and facing increased demands from sources other than UAVs. 
Because of the changing nature of warfighting, more and more military systems are coming to depend on the spectrum to guide precision weapons and obtain information superiority. Recently, because of advances in commercial technology, a competition for scarce frequency spectrum has developed between government and nongovernment users. Moreover, as the growing number of UAV systems become available for military units and civilian agencies, such as the Department of Homeland Security, their operation will also need to be integrated into the national airspace system. Currently, the Federal Aviation Administration requires detailed coordination and approval of UAV flights in the national airspace system. The Federal Aviation Administration and DOD are working on how to better integrate military UAVs within the national airspace system. In the future, UAVs are going to be used for homeland security, and their acceptance into civil airspace may be difficult to achieve until significant work is accomplished in the areas of reliability, regulation, communications, and collision avoidance. Recent operations are convincing military commanders that UAVs are of real value to the warfighter. That success on the battlefield is leading to more and more demand for UAVs and innovative ways of using them, creating pressures such as a greater need for interoperability of systems and competition for limited resources like money, electromagnetic frequency spectrum, and airspace. The UAVs that are successful today survived an environment characterized by a number of canceled programs, risky strategies, uncoordinated efforts, and uncertain funding. It took additional measures for them to succeed, not the least of which was strong management intervention. In recent years, DOD has taken positive steps to better manage the development of UAVs by creating the joint UAV Planning Task Force and the UAV Roadmap. 
The question is whether these steps will be sufficient to make the most out of current and future investments in UAVs. We believe that DOD should build on these good steps so that it will be in a better position to provide stewardship over these investments. Taking these steps will give Congress confidence that its investments in the technology will produce the optimum capabilities desired of UAVs. - - - - - Mr. Chairman, this concludes our prepared statement. We would be happy to answer any questions that you or Members of the subcommittee may have. For future questions about this statement, please contact Mr. Curtin at (202) 512-4914, Mr. Francis at (202) 512-2811, or Brian J. Lepore at (202) 512-4523. Individuals making key contributions to this statement include Fred S. Harrison, Lawrence E. Dixon, James K. Mahaffey, James A. Driggins, Jerry W. Clark, Jose Ramos, Jr., R.K. Wild, Bob Swierczek, and Kenneth E. Patton.
Force Structure: Improved Strategic Planning Can Enhance DOD’s Unmanned Aerial Vehicles Efforts. GAO-04-342. Washington, D.C.: March 17, 2004.
Nonproliferation: Improvements Needed for Controls on Exports of Cruise Missile and Unmanned Aerial Vehicles. GAO-04-493T. Washington, D.C.: March 9, 2004.
Nonproliferation: Improvements Needed to Better Control Technology Exports for Cruise Missiles and Unmanned Aerial Vehicles. GAO-04-175. Washington, D.C.: January 23, 2004.
Defense Acquisitions: Matching Resources with Requirements Is Key to the Unmanned Combat Air Vehicle Program’s Success. GAO-03-598. Washington, D.C.: June 30, 2003.
Unmanned Aerial Vehicles: Questionable Basis for Revisions to Shadow 200 Acquisition Strategy. GAO/NSIAD-00-204. Washington, D.C.: September 26, 2000.
Unmanned Aerial Vehicles: Progress of the Global Hawk Advanced Concept Technology Demonstration. GAO/NSIAD-00-78. Washington, D.C.: April 25, 2000.
Unmanned Aerial Vehicles: DOD’s Demonstration Approach Has Improved Project Outcomes. GAO/NSIAD-99-33. 
Washington, D.C.: August 30, 1999.
Unmanned Aerial Vehicles: Progress toward Meeting High Altitude Endurance Aircraft Price Goals. GAO/NSIAD-99-29. Washington, D.C.: December 15, 1998.
Unmanned Aerial Vehicles: Outrider Demonstrations Will Be Inadequate to Justify Further Production. GAO/NSIAD-97-153. Washington, D.C.: September 23, 1997.
Unmanned Aerial Vehicles: DOD’s Acquisition Efforts. GAO/T-NSIAD-97-138. Washington, D.C.: April 9, 1997.
Unmanned Aerial Vehicles: Hunter System Is Not Appropriate for Navy Fleet Use. GAO/NSIAD-96-2. Washington, D.C.: December 1, 1995.
Unmanned Aerial Vehicles: Performance of Short Range System Still in Question. GAO/NSIAD-94-65. Washington, D.C.: December 15, 1993.
Unmanned Aerial Vehicles: More Testing Needed Before Production of Short Range System. GAO/NSIAD-92-311. Washington, D.C.: September 4, 1992.
Unmanned Aerial Vehicles: Medium Range System Components Do Not Fit. GAO/NSIAD-91-2. Washington, D.C.: March 25, 1991.
Unmanned Aerial Vehicles: Realistic Testing Needed Before Production of Short Range System. GAO/NSIAD-90-234. Washington, D.C.: September 28, 1990.
Unmanned Vehicles: Assessment of DOD’s Unmanned Aerial Vehicle Master Plan. GAO/NSIAD-89-41BR. Washington, D.C.: December 9, 1988.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The current generation of unmanned aerial vehicles (UAVs) has been under development since the 1980s. UAVs were used in Afghanistan and Iraq in 2002 and 2003 to observe, track, target, and strike enemy forces. These successes have heightened interest in UAVs within the Department of Defense (DOD). 
Congress has been particularly interested in DOD's approach to managing the growing number of UAV programs. GAO was asked to summarize (1) the results of its most current report on DOD's approach to developing and fielding UAVs and the extent to which the approach provides reasonable assurance that its investment will lead to effective integration of UAVs into the force structure, and (2) the major management issues GAO has identified in prior reports on UAV research and development. GAO's most recent report points out that while DOD has taken some positive steps, its approach to UAV planning still does not provide reasonable assurance that the significant Congressional investment in UAVs will result in their effective integration into the force structure. In 2001, DOD established the joint UAV Planning Task Force in the Office of the Secretary of Defense to promote a common vision for UAV-related efforts and to establish interoperability standards. To communicate its vision and promote UAV interoperability, the task force issued the 2002 UAV Roadmap. While the Roadmap provides some strategic guidance for the development of UAV technology, neither the Roadmap nor other documents represent a comprehensive strategic plan to ensure that the services and other DOD agencies focus development efforts on systems that complement each other, will perform the range of priority missions needed, and avoid duplication. Moreover, the Task Force has only advisory authority and, as such, cannot compel the services to adopt its suggestions. GAO's prior work supports the need for effective oversight of individual UAV programs at the departmental level. UAVs have suffered from requirements growth, risky acquisition strategies, and uncertain funding support within the services. Some programs have been terminated. Success has been achieved as a result of top-level intervention and innovative acquisition approaches. 
For example, in 2003, the Office of the Secretary of Defense had to intervene to keep the Unmanned Combat Air Vehicle program viable. As UAV programs grow in the future, they will face challenges in the form of increased funding competition, greater demand for capabilities, and spectrum and airspace limitations. Moreover, UAVs are no longer an additional "nice-to-have" capability; they are becoming essential to the services' ability to conduct modern warfare. Meeting these challenges will require continued strong leadership, building on the UAV Roadmap and Planning Task Force as GAO has recommended. |
Since the publication of a 1957 report by the National Academy of Sciences, a geologic repository has been considered the safest and most secure method of isolating spent nuclear fuel and other types of nuclear waste from humans and the environment. During the 1950s and 1960s, managing spent nuclear fuel received relatively little attention from policymakers. The early regulators and developers of nuclear power viewed spent fuel disposal primarily as a technical problem that could be solved when necessary by application of existing technology. Attempts were made to reprocess the spent nuclear fuel, but they were not successful because of economic issues and concerns that reprocessed nuclear materials raised proliferation risks. The Atomic Energy Commission, a predecessor to DOE, attempted to develop high-level waste repositories in Kansas and New Mexico in the late 1960s and early 1970s, but neither succeeded because of local community and state opposition. NWPA established the disposal of spent nuclear fuel and high-level nuclear waste as a federal responsibility. Briefly, NWPA provided for the development of two geologic repositories and directed the Secretary of Energy to recommend three candidate sites and conduct studies to characterize each site. This same process was to be used for a second set of sites for the second repository. Table 1 summarizes some of the key decisions and events just prior to and as a result of NWPA. In the Secretary of Energy’s February 2002 recommendation to the President that Yucca Mountain be developed as the site for an underground repository for spent fuel and other radioactive wastes, the Secretary described the three criteria to make the determination that Yucca Mountain was the appropriate site. Specifically: Is Yucca Mountain a scientifically and technically suitable site for a repository? Are there compelling national interests that favor proceeding with the decision to site a repository there? 
Are there countervailing considerations that would outweigh those interests? The Secretary also described the steps DOE had taken to inform residents and others. Specifically, DOE held meetings in the vicinity of the prospective site to inform the residents of the site’s consideration as a repository and receive their comments, as directed by NWPA. The Secretary added that DOE went beyond NWPA’s requirements for providing notice and information prior to the selection of Yucca Mountain. He concluded that the Yucca Mountain site was qualified as the site for the repository and accordingly recommended the site to the President. Since the Secretary’s recommendation was made, the nation’s inventory of commercial spent nuclear fuel has continued to grow. The nation currently has about 70,000 metric tons of commercial spent nuclear fuel stored at 75 sites in 33 states (see fig. 1). This inventory is expected to more than double by 2055—assuming that the nation’s current reactors continue to produce spent nuclear fuel at the same rate and that no new reactors are brought online, and that some decline in the generation of spent fuel takes place as reactors are retired. Although some elements of spent nuclear fuel cool and decay quickly, becoming less dangerous, others remain dangerous to human health and the environment for tens of thousands of years. Most commercial spent nuclear fuel is stored at operating reactor sites; it is immersed in pools of water designed to cool and isolate it from the environment. Without a nuclear waste repository to move the spent nuclear fuel to, the racks in the pools holding spent fuel have been rearranged to allow for more dense storage of the spent fuel. Even with this rearrangement, spent nuclear fuel pools are reaching their capacities. 
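As a rough illustration, the doubling projection above can be sketched with a simple accumulation model; the annual generation rate used here is an assumed illustrative figure, not a number stated in the testimony.

```python
# Back-of-the-envelope projection of U.S. commercial spent fuel inventory.
# The testimony gives ~70,000 metric tons today and "more than double by 2055".
current_inventory_mt = 70_000
assumed_annual_generation_mt = 2_000   # assumption for illustration only
start_year, end_year = 2013, 2055

projected = current_inventory_mt + assumed_annual_generation_mt * (end_year - start_year)
print(projected)                          # 154000 MT under these assumptions
print(projected / current_inventory_mt)   # ~2.2x, i.e. "more than double"
```

Under these assumptions the inventory roughly doubles well before 2055; the testimony's projection additionally nets out reactor retirements, which this sketch ignores.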
As reactor operators have run out of space in their spent nuclear fuel pools, they have turned increasingly to dry cask storage systems that generally consist of stainless steel canisters placed inside larger stainless steel or concrete casks. A dry storage facility typically consists of security and safety mechanisms, such as a defensive perimeter with intrusion detection devices and radiation monitors surrounding a concrete pad with the dry storage casks emplaced on it. Regulatory requirements for radiation exposure for this type of facility are significantly different from those of a repository. For example, spent fuel need only be stored safely for the life of the storage facility, currently 40 years, which is in contrast to the 1 million year period for which safe storage must be demonstrated under the Environmental Protection Agency regulation promulgated for the Yucca Mountain repository. In August 2012, we reported that reactors at nine sites have been retired and that seven of these sites have completely removed spent fuel from their pools, as well as removing all infrastructure except that needed to safeguard the spent fuel. Since then, an eighth site has also emptied its pool, and is in the process of removing associated infrastructure. These sites serve no other purpose than to continue storing this spent fuel. As additional reactors retire, reactor operators will likely move all their spent nuclear fuel to dry storage and remove all other structures. We reported in November 2009 that experts we spoke with stated that dry cask storage systems are expected to be able to safely store spent nuclear fuel for at least 100 years. The experts said that, if these systems degrade over time, the spent nuclear fuel may have to be repackaged, which could require construction of new spent nuclear fuel pools or other structures to safely transfer the spent nuclear fuel to new storage systems. 
In addition, the experts said that spent fuel in centralized interim storage could present future security risks because, as spent fuel cools, it loses some of its self-protective qualities, potentially making it a more attractive target for sabotage or theft. NWPA also authorized DOE to contract with commercial nuclear reactor operators to take custody of their spent nuclear fuel for disposal at the repository beginning in January 1998. Ultimately, DOE was unable to meet this 1998 date. As we reported in August 2012, because DOE did not take custody of the spent fuel starting in 1998, as required under NWPA, DOE reported that, as of September 2011, 76 lawsuits had been filed against it by utilities to recover claimed damages resulting from the delay. In August 2012, we reported that these lawsuits have resulted in a cost to taxpayers of about $1.6 billion from the U.S. Treasury’s judgment fund. We also reported that DOE estimated that future liabilities would total about an additional $21 billion through 2020. In November 2012, DOE reported that the cost to taxpayers is now $2.6 billion and that future liabilities are now approximately $19.7 billion, for a total of about $22.3 billion. DOE has also estimated that future liabilities may cost about $500 million each year after 2020. In November 2009, we reported on the attributes and challenges of a Yucca Mountain repository. We reported that DOE had spent billions of dollars for design, engineering, and testing activities for the Yucca Mountain site and had submitted a license application to the Nuclear Regulatory Commission. If the repository had been built as planned, we stated that it would have provided a permanent solution for the nation’s nuclear waste, including commercial nuclear fuel, and would have minimized the uncertainty of future waste safety. 
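The taxpayer-liability figures quoted above can be cross-checked with simple arithmetic; the ten-year extrapolation at the end is an illustrative horizon, not a DOE estimate.

```python
# Cross-check of the November 2012 liability figures in the testimony ($ billions).
paid_to_date = 2.6            # cost to taxpayers from the Treasury judgment fund
future_through_2020 = 19.7    # DOE's estimated future liabilities through 2020
total = round(paid_to_date + future_through_2020, 1)
print(total)                  # 22.3 -> "a total of about $22.3 billion"

# DOE also estimated roughly $0.5 billion per year in liabilities after 2020;
# ten further years (an assumed horizon, not from the testimony) would add:
print(round(total + 0.5 * 10, 1))  # 27.3
```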
Based on a review of key documents and interviews with DOE, Nuclear Regulatory Commission, and numerous other officials, we also reported in November 2009 that the construction of a repository at Yucca Mountain could have allowed the government to begin taking possession of the nuclear waste in about 10 to 30 years. DOE had reported in July 2008 that its best achievable date for opening the repository, if it had received Nuclear Regulatory Commission approval, would have been 2020. If the Yucca Mountain repository was completed and operational sooner than one or more temporary storage facilities or an alternative repository, it could have helped address the federal liabilities resulting from industry lawsuits related to continued storage of spent nuclear fuel at reactor sites. We also reported in August 2012 that states and community groups had raised concerns that the Nuclear Regulatory Commission was extending the licenses of current reactors or approving licenses for new reactors without a long-term solution for the disposition of spent nuclear fuel. If Yucca Mountain was licensed and constructed and began accepting spent nuclear fuel for disposal by 2027, which was the earliest likely opening date we estimated in our August 2012 report, some of these concerns could have been addressed. Restarting the repository program, however, would present its own challenges. DOE could face challenges in reconstituting its work force. According to DOE, contractor, and former DOE officials we spoke with, it could take years for DOE to assemble the right mix of experts to restart work on the license application. When DOE terminated its licensing efforts, many of the federal and contractor staff working on the program retired or moved on to other jobs. Project funding could also continue to be a challenge. As we reported, DOE’s budget for the Yucca Mountain repository program was not predictable because annual appropriations varied by as much as 20 percent from year to year. 
We recommended that Congress consider a more predictable funding mechanism for the project, which the Blue Ribbon Commission also recommended in its January 2012 report. We reported in November 2009 on several positive attributes of centralized interim storage—a near-term temporary storage alternative for managing the spent fuel that has accumulated and will continue to accumulate. First, centralized interim storage could allow DOE to consolidate the nation’s nuclear waste after reactors are decommissioned, thereby decreasing the complexity of securing and overseeing the waste located at reactor sites around the nation and increasing the efficiency of waste storage operations. Second, by moving spent nuclear fuel from decommissioned reactor sites to DOE’s centralized interim storage facility and taking custody of the spent fuel, DOE would begin to address the taxpayer financial liabilities stemming from industry lawsuits. Third, centralized interim storage could prevent utilities from having to build additional dry storage to store nuclear waste at operating reactor sites. Fourth, centralized interim storage could also provide the nation with some flexibility to consider alternative policies or new technologies by giving more time to consider alternatives and implement them. For example, centralized interim storage would keep spent fuel in a safe, easily accessible configuration for future recycling, if the nation decided to pursue recycling as a management option in the future. However, centralized interim storage also presents challenges. First, as we reported in November 2009 and August 2012, a key challenge confronting centralized interim storage is the uncertainty of DOE’s statutory authority to provide centralized storage. Provisions in NWPA that allow DOE to arrange for centralized storage have either expired or are unusable because they are tied to milestones in repository development that have not been met. 
It is not clear what other authority DOE or an independent entity might use for providing centralized interim storage of spent nuclear fuel. A second, equally important, challenge is the likelihood of opposition during site selection for a centralized interim storage facility. As we reported in November 2009, even if a community might be willing to host such a facility, finding a state that would be willing to host it could be extremely challenging, particularly since some states have voiced concerns that a centralized interim facility could become a de facto permanent disposal site. In 2011, the Western Governors Association passed a resolution stating that no centralized interim storage facility for spent nuclear fuel can be established in a western state without the expressed written consent of the governors. Third, centralized interim storage may also present transportation challenges. As we reported in August 2012, it is likely that the spent fuel would have to be transported twice—once to the centralized interim storage site and once to a permanent disposal site. The total distance over which the spent fuel would have to be transported would likely be greater than with other alternatives. The Nuclear Energy Institute has reported that of all the spent fuel currently in dry storage, only about 30 percent is directly transportable because of its current heat load, particularly since the nuclear industry packaged some spent nuclear fuel in dry storage containers to maximize storage capacity. We also reported in August 2012 that officials from a state regional organization that we spoke with said that transportation planning could be a complex endeavor, potentially taking 10 years to reach agreement on transportation routes and safety and security procedures. 
Fourth, although DOE had previously estimated that it could site, license, construct, and begin operations of a centralized interim storage facility within 6 years, it could take considerably longer depending on how long it takes to find a willing state and community, as well as license and construct the facility. Finally, as we reported in November 2009, developing centralized interim storage would not ultimately preclude the need for final disposal of the spent nuclear fuel. As we reported in November 2009, siting, licensing, and developing a permanent repository at a location other than Yucca Mountain could provide the opportunity to find a location that might achieve broader acceptance than the Yucca Mountain repository program. If a more widely accepted approach or site is identified, it carries the potential for avoiding costly delays experienced by the Yucca Mountain repository program. In addition, a new approach that involves a new entity for spent fuel management, as we concluded in our April 2011 report and the Blue Ribbon Commission recommended in January 2012, could add to transparency and consensus building. However, there are also key challenges to developing an alternative repository. First, as we reported in April 2011, developing a repository other than Yucca Mountain will restart the likely time-consuming and costly process of siting, licensing, and developing a repository. We reported that DOE had spent nearly $15 billion on the Yucca Mountain project. It is not yet clear how much it will ultimately cost to begin the process again and develop a repository at another location. Moreover, it is uncertain what legislative changes might be needed, if any, in part because the Nuclear Waste Policy Act, as amended, directs DOE to terminate all site-specific activities at candidate sites other than Yucca Mountain. Second, it is unclear whether the Nuclear Waste Fund will be sufficient to fund a repository at another site. 
The fund was established under NWPA to pay industry’s share of the cost for the Yucca Mountain repository and was funded by a fee of one-tenth of a cent per kilowatt-hour of nuclear-generated electricity. The fund paid about 65 percent, or about $9.5 billion, of the expenditure for Yucca Mountain. According to DOE’s fiscal year 2012 financial report, the Nuclear Waste Fund currently has about $29 billion and grows by over $1 billion each year from accumulated fees and interest. However, utilities only pay into the fund for as long as their reactors are operating, and it is not clear how much longer reactor operators will be paying into the fund. For example, two utilities have announced plans—one in 2010 and the other in 2013—to shut down two reactor sites prior to their license expiration. As reactors are retired, they will need to be replaced by new reactors paying into the fund, or according to DOE officials, the fund might be drawn down faster than it can be replenished when developing a new repository. When more comprehensive information becomes available both about the process that DOE, or another agency, will be using to select a site and possible locations for a permanent repository, additional positive attributes as well as challenges may also come to light. Chairman Frelinghuysen, Ranking Member Kaptur, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Janet Frisch, Assistant Director, and Kevin Bray, Robert Sánchez, and Kiki Theodoropoulos made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Spent nuclear fuel, the used fuel removed from commercial nuclear power reactors, is one of the most hazardous substances created by humans. Commercial reactors have generated nearly 70,000 metric tons of spent fuel, which is currently stored at 75 reactor sites in 33 states, and this inventory is expected to more than double by 2055. The Nuclear Waste Policy Act of 1982, as amended, directs DOE to investigate the Yucca Mountain site in Nevada--100 miles northwest of Las Vegas--to determine if the site is suitable for a permanent repository for this and other nuclear waste. DOE submitted a license application for the Yucca Mountain site to the Nuclear Regulatory Commission in 2008, but in 2010 DOE suspended its licensing efforts and instead established a blue ribbon commission to study other options. The commission issued a report in January 2012 recommending a new strategy for managing nuclear waste, and DOE issued a new nuclear waste disposal strategy in 2013. This testimony is primarily based on prior work GAO issued from November 2009 to August 2012 and updated with information from DOE. It discusses the key attributes and challenges of options that have been considered for storage or disposal of spent nuclear fuel. GAO is making no new recommendations at this time. In November 2009, GAO reported on the attributes and challenges of a Yucca Mountain repository. 
A key attribute identified was that the Department of Energy (DOE) had spent significant resources to carry out design, engineering, and testing activities on the Yucca Mountain site and had completed a license application and submitted it to the Nuclear Regulatory Commission, which has regulatory authority over the construction, operation, and closure of a repository. If the repository had been built as planned, GAO concluded that it would have provided a permanent solution for the nation's commercial nuclear fuel and other nuclear waste and minimized the uncertainty of future waste safety. Constructing the repository also could have helped address issues including federal liabilities resulting from industry lawsuits against DOE related to continued storage of spent nuclear fuel at reactor sites. However, not having the support of the administration and the state of Nevada proved a key challenge. As GAO reported in April 2011, DOE officials did not cite technical or safety issues with the Yucca Mountain repository project when the project's termination was announced but instead stated that other solutions could achieve broader support. Temporarily storing spent fuel in a central location offers several positive attributes, as well as challenges, as GAO reported in November 2009 and August 2012. Positive attributes include allowing DOE to consolidate the nation's nuclear waste after reactors are decommissioned. Consolidation would decrease the complexity of securing and overseeing the waste located at reactor sites around the nation and would allow DOE to begin to address the taxpayer financial liabilities stemming from industry lawsuits. Interim storage could also provide the nation with some flexibility to consider alternative policies or new technologies. However, interim storage faces several challenges. First, DOE's statutory authority to develop interim storage is uncertain. 
Provisions in the Nuclear Waste Policy Act of 1982, as amended, that allow DOE to arrange for centralized interim storage have either expired or are unusable because they are tied to milestones in repository development that have not been met. Second, siting an interim storage facility could prove difficult. Even if a community might be willing to host a centralized interim storage facility, finding a state that would be willing to host such a facility could be challenging, particularly since some states have voiced concerns that an interim facility could become a de facto permanent disposal site. Third, interim storage may also present transportation challenges since it is likely that the spent fuel would have to be transported twice--once to the interim storage site and once to a permanent disposal site. Finally, developing centralized interim storage would not ultimately preclude the need for a permanent repository for spent nuclear fuel. Siting, licensing, and developing a permanent repository at a location other than Yucca Mountain could provide the opportunity to find a location that might achieve broader acceptance, as GAO reported in November 2009 and August 2012, and could help avoid costly delays experienced by the Yucca Mountain repository program. However, developing an alternative repository would restart the likely costly and time-consuming process of developing a repository. It is also unclear whether the Nuclear Waste Fund--established under the Nuclear Waste Policy Act of 1982, as amended, to pay industry's share of the cost for the Yucca Mountain repository--will be sufficient to fund a repository at another site. |
A long-standing problem in DOD space acquisitions is that program and unit costs tend to go up significantly from initial cost estimates, while in some cases the capability that was to be delivered goes down. Figure 1 compares original cost estimates and current cost estimates for the broader portfolio of major space acquisitions for fiscal years 2010 through 2015. The wider the gap between original and current estimates, the fewer dollars DOD has available to invest in new programs. As shown in the figure, cumulative estimated costs for the major space acquisition programs have increased by about $13.9 billion from initial estimates for fiscal years 2010 through 2015, almost a 286 percent increase. The declining investment in the later years is the result of mature programs that have planned lower out-year funding, cancellation of several development efforts, and the exclusion of space acquisition efforts for which total cost data were unavailable (such as new investments). When space system investments other than established acquisition programs of record—such as the Defense Weather Satellite System (DWSS) and Space Fence programs—are also considered, DOD’s space acquisition investments remain significant through fiscal year 2016, as shown in figure 2. Although estimated costs for selected space acquisition programs decrease 21 percent between fiscal years 2010 and 2015, they start to increase in fiscal year 2016. And, according to current DOD estimates, costs for two programs— Advanced Extremely High Frequency (AEHF) and Space Based Infrared System (SBIRS) High—are expected to significantly increase in fiscal years 2017 and 2018. The costs are associated with the procurement of additional blocks of satellites and are not included in the figure because they have not yet been reported or quantified. Figures 3 and 4 reflect differences in total program and unit costs for satellites from the time the programs officially began to their most recent cost estimates. 
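The percent-increase framing used for the portfolio figures above is simple arithmetic; in this sketch the original baseline is derived from the numbers quoted, not a figure stated in the testimony.

```python
def percent_increase(original: float, current: float) -> float:
    """Percent growth from an original cost estimate to a current estimate."""
    return (current - original) / original * 100

# The testimony describes cumulative growth of about $13.9 billion as "almost
# a 286 percent increase"; the original baseline implied by that arithmetic
# (derived here, not stated in the testimony) is roughly:
growth_b = 13.9
implied_original_b = growth_b / 2.86
print(round(implied_original_b, 1))  # ~4.9 ($ billions)
print(round(percent_increase(implied_original_b, implied_original_b + growth_b)))  # 286
```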
As figure 4 shows, in several cases, DOD has increased the number of satellites. The figures reflect total program cost estimates developed in fiscal year 2010. Several space acquisition programs are years behind schedule. Figure 5 highlights the additional estimated months needed for programs to launch their first satellites. These additional months represent time not anticipated at the programs’ start dates. Generally, the further schedules slip, the more DOD is at risk of not sustaining current capabilities. For example, delays in launching the first MUOS satellite have placed DOD’s ultra high frequency communications capabilities at risk of falling below the required availability level. DOD had long-standing difficulties on nearly every space acquisition program, struggling for years with cost and schedule growth, technical or design problems, as well as oversight and management weaknesses. However, to its credit, it continues to make progress on several of its high-risk space programs, and is expecting to deliver significant advances in capability as a result. The Missile Defense Agency’s (MDA) Space Tracking and Surveillance System (STSS) demonstration satellites were launched in September 2009. Additionally, DOD launched its first GPS IIF satellite in May 2010 and plans to launch the second IIF satellite in June 2011—later than planned, partially because of system-level problems identified during testing. It also launched the first AEHF satellite in August 2010—although it has not yet reached its final planned orbit because of an anomaly with the satellite’s propulsion system—and launched the Space Based Space Surveillance (SBSS) Block 10 satellite in September 2010. DOD is scheduled to launch a fourth Wideband Global SATCOM (WGS) satellite—broadening communications capability available to warfighters—in late 2011, and a fifth WGS satellite in early 2012. 
The Evolved Expendable Launch Vehicle (EELV) program had its 41st consecutive successful operational launch in May of this year. One program that appears to have recently overcome remaining technical problems is the SBIRS High satellite program. The first of six geosynchronous earth-orbiting (GEO) satellites (two highly elliptical orbit sensors have already been launched) was launched in May 2011 and is expected to continue the missile warning mission with sensors that are more capable than the satellites currently on orbit. Total cost for the SBIRS High program is currently estimated at over $18 billion for six GEO satellites, representing a program unit cost of over $3 billion, about 233 percent more than the original unit cost estimate. Additionally, the launch of the first GEO satellite represents a delay of approximately 9 years. The reasons for the delay include poor government oversight of the contractor, unanticipated technical complexities, and rework. The program office is working to rebaseline the SBIRS High contract cost and schedule estimates for the sixth time. Because of the problems on SBIRS High, in 2007, DOD began a follow-on system effort, which was known as Third Generation Infrared Surveillance (3GIRS), to run in parallel with the SBIRS High program. DOD canceled the 3GIRS effort in fiscal year 2011, but plans to continue providing funds under the SBIRS High program for one of the 3GIRS infrared demonstrations. While DOD is having success in readying some satellites for launch, other space acquisition programs face challenges that could further increase cost and delay delivery targets. The programs that may be susceptible to cost and schedule challenges include MUOS and the GPS IIIA program. Delays in the MUOS program have resulted in critical potential capability gaps for military and other government users. 
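The SBIRS High unit-cost figures above follow from simple division; the original unit-cost estimate derived at the end is implied arithmetic, not a number stated in the testimony.

```python
# SBIRS High figures as reported in the testimony ($ billions).
total_program_cost = 18.0   # "over $18 billion"
geo_satellites = 6          # six GEO satellites
unit_cost = total_program_cost / geo_satellites
print(unit_cost)            # 3.0 -> "a program unit cost of over $3 billion"

# "about 233 percent more than the original unit cost estimate" implies an
# original estimate (derived here, not stated in the testimony) of roughly:
original_unit_cost = unit_cost / (1 + 2.33)
print(round(original_unit_cost, 1))  # ~0.9 ($ billions)
```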
The GPS IIIA program was planned with an eye toward avoiding problems that plagued the GPS IIF program and it incorporated many of the best practices recommended by GAO, but the schedule leaves little room for potential problems and there is a risk that the ground system needed to operate the satellites will not be ready when the first satellite is launched. Additionally, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was restructured as a result of poor program performance and cost overruns, which caused schedule delays. These delays have resulted in a potential capability gap for weather and environmental monitoring. Furthermore, new space system acquisition efforts getting underway—including the Air Force’s Joint Space Operations Center Mission System (JMS) and Space Fence, and MDA’s Precision Tracking and Surveillance System (PTSS)—face potential development challenges and risks, but it is too early to tell how significant they may be to meeting cost, schedule, and performance goals. Table 1 describes the status of these efforts in more detail. Over the past year, we have completed reviews of sustaining and upgrading GPS capabilities and commercializing space technologies under the Small Business Innovation Research program (SBIR), and we have ongoing reviews of (1) DOD space situational awareness (SSA) acquisition efforts, (2) parts quality for DOD, MDA, and the National Aeronautics and Space Administration (NASA), and (3) a new acquisition strategy being developed for the EELV program. These reviews, discussed further below, underscore the varied challenges that still face the DOD space community as it seeks to complete problematic legacy efforts and deliver modernized capabilities. 
Our reviews of GPS and space situational awareness, for instance, have highlighted the need for more focused coordination and leadership for space activities that touch a wide range of government, international, and industry stakeholders; while our review of the SBIR program highlighted the substantial barriers and challenges small business must overcome to gain entry into the government space arena. GPS. We found that the GPS IIIA schedule remains ambitious and could be affected by risks such as the program’s dependence on a ground system that will not be completed until after the first IIIA launch. We found that the GPS constellation availability had improved, but in the longer term, a delay in the launch of the GPS IIIA satellites could still reduce the size of the constellation to fewer than 24 operational satellites—the number that the U.S. government commits to—which might not meet the needs of some GPS users. We also found that the multiyear delays in the development of GPS ground control systems were extensive. Although the Air Force had taken steps to enable quicker procurement of military GPS user equipment, there were significant challenges to its implementation. This has had a significant impact on DOD as all three GPS segments—space, ground control, and user equipment—must be in place to take advantage of new capabilities. Additionally, we found that DOD had taken some steps to better coordinate all GPS segments, including laying out criteria and establishing visibility over a spectrum of procurement efforts, but it did not go as far as we recommended in 2009 in terms of establishing a single authority responsible for ensuring that all GPS segments are synchronized to the maximum extent practicable. Such an authority is warranted given the extent of delays, problems with synchronizing all GPS segments, and importance of new capabilities to military operations. As a result, we reiterated the need to implement our prior recommendation. 
Small Business Innovation Research (SBIR). In response to a request from this subcommittee, we found that while DOD is working to commercialize space-related technologies under its SBIR program by transitioning these technologies into acquisition programs or the commercial sector, it has limited insight into the program’s effectiveness. Specifically, DOD has invested about 11 percent of its fiscal years 2005–2009 research and development funds through its SBIR program to address space-related technology needs. Additionally, DOD is soliciting more space-related research proposals from small businesses. Further, DOD has implemented a variety of programs and initiatives to increase the commercialization of SBIR technologies and has identified instances where it has transitioned space-related technologies into acquisition programs or the commercial sector. However, DOD lacks complete commercialization data to determine the effectiveness of the program in transitioning space-related technologies into acquisition programs or the commercial sector. Of the nearly 500 space-related contracts awarded in fiscal years 2005 through 2009, DOD officials could not, for various reasons, identify the total number of technologies that transitioned into acquisition programs or the commercial sector. Further, there are challenges to executing the SBIR program that DOD officials acknowledge and are planning to address, such as the lack of overarching guidance for managing the DOD SBIR program. Under this review, most stakeholders we spoke with—DOD, prime contractors, and small business officials—generally agreed that small businesses participating in the DOD SBIR program face difficulties transitioning their space-related technologies into acquisition programs or the commercial sector. 
Although we did not assess the validity of the concerns cited, stakeholders we spoke with identified challenges inherent to developing space technologies; challenges because of the SBIR program’s administration, timing, and funding issues; and other challenges related to participating in the DOD space system acquisitions environment. For example, some small-business officials said that working in the space community is challenging because the technologies often require more expensive materials and testing than other technologies. They also mentioned that delayed contract awards and slow contract disbursements have caused financial hardships. Additionally, several small businesses cited concerns with safeguarding their intellectual property.

Space Situational Awareness (SSA). We have found that while DOD has significantly increased its investment and planned investment in SSA acquisition efforts in recent years to address growing SSA capability shortfalls, most efforts designed to meet these shortfalls have struggled with cost, schedule, and performance challenges and are rooted in systemic problems that most space system acquisition programs have encountered over the past decade. Consequently, in the past 5 fiscal years, DOD has not delivered significant new SSA capabilities as originally expected. Capabilities that were delivered served to sustain or modernize existing systems versus closing capability gaps. To its credit, last fall the Air Force launched a space-based sensor that is expected to appreciably enhance SSA. However, two critical acquisition efforts that are scheduled to begin development within the next 2 years—Space Fence and JMS—face development challenges and risks, such as the use of immature technologies and planning to deliver all capabilities in a single, large increment versus smaller and more manageable increments. 
It is essential that these acquisitions are placed on a solid footing at the start of development to help ensure that their capabilities are delivered to the warfighter as and when promised. DOD plans to begin delivering other new capabilities in the coming 5 years, but it is too early to determine the extent to which these additions will address capability shortfalls. We have also found that there are significant inherent challenges to executing and overseeing the SSA mission, largely because of the sheer number of governmentwide organizations and assets involved in the mission. This finding is similar to what we have reported from other space system acquisition reviews over the years. Additionally, while the recently issued National Space Policy assigns SSA responsibility to the Secretary of Defense, the Secretary does not necessarily have the corresponding authority to execute this responsibility. However, actions, such as development of a national SSA architecture, are being taken that could help facilitate management and oversight governmentwide. The National Space Policy, which recognizes the importance of SSA, directs other positive steps, such as the determination of roles, missions, and responsibilities to manage national security space capabilities and the development of options for new measures for improving SSA capabilities. Furthermore, the recently issued National Security Space Strategy could help guide the implementation of the new space policy. We expect our report based on this review to be issued in June 2011.

Parts quality for DOD, MDA, and NASA. Quality is paramount to the success of DOD space systems because of their complexity, the environment they operate in, and the high degree of accuracy and precision needed for their operations. Yet in recent years, many programs have encountered difficulties with quality workmanship and parts. 
For example, DOD’s AEHF protected communications satellite has yet to reach its intended orbit because of a blockage in a propellant line. Also, MDA’s STSS program experienced a 15-month delay in the launch of demonstration satellites because of a faulty manufacturing process of a ground-to-spacecraft communication system part. Furthermore, NASA’s Mars Science Laboratory program experienced a 1-year delay in the development of the descent and cruise stage propulsion systems because of a welding process error. We plan to issue a report on the results of a review that focuses specifically on parts quality issues in June 2011. We are examining the extent to which parts quality problems are affecting DOD, MDA, and NASA space and missile defense programs; the causes of these problems; and initiatives to detect and prevent parts quality problems.

EELV acquisition strategy. DOD spends billions of dollars on launch services and infrastructure through two families of commercially owned and operated vehicles under the EELV program. This investment allows the nation to launch its national security satellites that provide the military and intelligence community with advanced space-based capabilities. DOD is preparing to embark on a new acquisition strategy for the EELV program. Given the costs and importance of space launch activities, it is vital that this strategy maximize cost efficiencies while still maintaining a high degree of mission assurance and a healthy industrial base. We are currently reviewing activities leading up to the strategy and plan to issue a report on the results of this review in June 2011. In particular, we are examining whether DOD has the knowledge it needs to develop a new EELV acquisition strategy and the extent to which there are important factors that could affect launch acquisitions.

DOD continues to work to ensure that its space programs are more executable and produce a better return on investment. 
Many of the actions it has been taking address root causes of problems, though it will take time to determine whether these actions are successful, and they need to be complemented by decisions on how best to lead, organize, and support space activities. Our past work has identified a number of causes of the cost growth and related problems, but several consistently stand out.

First, on a broad scale, DOD has tended to start more weapon programs than it can afford, creating a competition for funding that encourages low cost estimating, optimistic scheduling, overpromising, suppressing bad news, and, for space programs, forsaking the opportunity to identify and assess potentially more executable alternatives. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD is forced to continually shift funds to and from programs—particularly as programs experience problems that require additional time and money to address. Such shifts, in turn, have had costly, reverberating effects.

Second, DOD has tended to start its space programs too early, that is, before it has the assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. This tendency is caused largely by the funding process, since acquisition programs attract more dollars than efforts concentrating solely on proving technologies. Nevertheless, when DOD chooses to extend technology invention into acquisition, programs experience technical problems that require large amounts of time and money to fix. Moreover, when this approach is followed, cost estimators are not well positioned to develop accurate cost estimates because there are too many unknowns. Put more simply, there is no way to accurately estimate how long it would take to design, develop, and build a satellite system when critical technologies planned for that system are still in relatively early stages of discovery and invention. 
Third, programs have historically attempted to satisfy all requirements in a single step, regardless of the design challenges or the maturity of the technologies necessary to achieve the full capability. DOD has preferred to make fewer but heavier, larger, and more complex satellites that perform a multitude of missions rather than larger constellations of smaller, less complex satellites that gradually increase in sophistication. This has stretched technology challenges beyond current capabilities in some cases and vastly increased the complexities related to software. Programs also seek to maximize capability on individual satellites because it is expensive to launch them. Figure 6 illustrates the various factors that can break acquisitions.

Many of these underlying issues affect the broader weapons portfolio as well, though we have reported that space programs are particularly affected by the wide disparity of users, including DOD, the intelligence community, other federal agencies, and in some cases, other countries, U.S. businesses, and citizens. Moreover, the problematic implementation of an acquisition strategy for space systems in the 1990s, known as Total System Performance Responsibility, resulted in problems on a number of programs because it was implemented in a manner that enabled requirements creep and poor contractor performance—the effects of which space programs are finally overcoming. We have also reported on shortfalls in resources for testing new technologies, which, coupled with less expertise and fewer contractors available to lead development efforts, have magnified the challenge of developing complex and intricate space systems. Our work—which is largely based on best practices in the commercial sector—has recommended numerous actions that can be taken to address the problems we identified. 
Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions to move to next phases. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that could benefit space programs. These practices are highlighted in appendix I. Over the past several years, DOD has implemented or has been implementing a number of actions to reform how space and weapon systems are acquired, both through its own initiatives as well as those required by statute. Additionally, DOD is evaluating and proposing new actions to increase space system acquisition efficiency and effectiveness. Because many of these actions are relatively new, or not yet fully implemented, it is too early to tell whether they will be effective or effectively implemented. For space in particular, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin, requirements are defined early in the process and are stable throughout, and system design remains stable. DOD also intends to follow incremental or evolutionary acquisition processes versus pursuing significant leaps in capabilities involving technology risk and has done so with the only new major satellite program undertaken by the Air Force in recent years—GPS IIIA. DOD is also providing more program and contractor oversight and putting in place military standards and specifications in its acquisitions. Additionally, DOD and the Air Force are working to streamline management and oversight of the national security space enterprise. 
For example, all Air Force space system acquisition responsibility has been aligned to the office that has been responsible for all other Air Force acquisition efforts, and the Defense Space Council—created last year—is reviewing, as one of its first agenda items, options for streamlining the many committees, boards, and councils involved in space issues. These and other actions that have been taken or are being taken that could improve space system acquisition outcomes are described in table 2. At the DOD-wide level, and as we reported last year, Congress and DOD have recently taken major steps toward reforming the defense acquisition system in ways that may increase the likelihood that weapon programs will succeed in meeting planned cost and schedule objectives. In particular, new DOD policy and legislative provisions place greater emphasis on front-end planning and establishing sound business cases for starting programs. For example, the provisions require programs to invest more time and resources to refine concepts through practices such as early systems engineering, strengthen cost estimating, develop technologies, build prototypes, hold early milestone reviews, and develop preliminary designs before starting system development. These provisions are intended to enable programs to refine a weapon system concept and make cost, schedule, and performance trade-offs before significant commitments are made. In addition, DOD policy requires establishment of configuration steering boards that meet annually to review program requirements changes as well as to make recommendations on proposed descoping options that could reduce program costs or moderate requirements. Fundamentally, these provisions should help (1) programs replace risk with knowledge and (2) set up more executable programs. Key DOD and legislative provisions compared with factors we identified in programs that have been successful in meeting cost and schedule baselines are summarized in table 3. 
Furthermore, the Ike Skelton National Defense Authorization Act for Fiscal Year 2011, signed into law on January 7, 2011, contains further direction aimed at improving acquisition outcomes, including, among other things, a requirement for the Secretary of Defense to issue guidance on the use of manufacturing readiness levels (including specific levels that should be achieved at key milestones and decision points), elevating the role of combatant commanders in DOD’s requirements-setting process, and provisions for improving the acquisition workforce. While it is too soon to determine if Congress’s and DOD’s reform efforts will improve weapon program outcomes, DOD has taken steps to implement the provisions. For example, in December 2009, the department issued a new implementation policy, which identifies roles and responsibilities and institutionalizes many of the requirements of the Weapon Systems Acquisition Reform Act of 2009. DOD has also filled several key leadership positions created by the legislation, including the Directors for Cost Assessment and Program Evaluation, Developmental Test and Evaluation, Systems Engineering, and Performance Assessments and Root Cause Analyses. To increase oversight, the department embarked on a 5-year effort to increase the size of the acquisition workforce by up to 20,000 personnel by 2015. Furthermore, the department began applying the acquisition reform provisions to some new programs currently in the planning pipeline. For example, many of the pre-Milestone B programs we reviewed this year as part of our annual assessment of selected weapon programs planned to conduct preliminary design reviews before going to Milestone B, although fewer are taking other actions, such as developing prototypes, that could improve their chances of success. 
With respect to space system acquisitions, GPS III in particular—DOD’s newest major space system acquisition—has embraced the knowledge-based concepts behind our previous recommendations as a means of preventing large cost overruns and schedule delays. Additionally, the Office of the Secretary of Defense and the Air Force are proposing new acquisition strategies for satellites and launch vehicles.

In June of last year, and as part of the Secretary of Defense’s Efficiencies Initiative, the Under Secretary of Defense for Acquisition, Technology and Logistics began an effort to restore affordability and productivity in defense spending. Major thrusts of this effort include targeting affordability and controlling cost growth, incentivizing productivity and innovation in industry, promoting real competition, improving tradecraft in services acquisition, and reducing nonproductive processes and bureaucracy. As part of this effort, the Office of the Secretary of Defense and the Air Force are proposing a new acquisition strategy for procuring satellites, called the Evolutionary Acquisition for Space Efficiency (EASE), to be implemented starting in fiscal year 2012. Primary elements of this strategy include block buys of two or more satellites (economic order quantities) using a multiyear procurement construct, use of fixed-price contracting, stable research and development investment, evolutionary development, and stable requirements. According to DOD, EASE is intended to help stabilize funding, staffing, and subtier suppliers; help ensure mission continuity; reduce the impacts associated with obsolescence and production breaks; and increase long-term affordability with cost savings of over 10 percent. DOD anticipates first applying the EASE strategy to procuring two AEHF satellites beginning in fiscal year 2012, followed by procurement of two SBIRS High satellites beginning in fiscal year 2013. 
According to the Air Force, it will consider applying the EASE strategy—once it is proven—to other space programs, such as GPS III. We have not yet conducted a review of the EASE strategy to assess the potential benefits, challenges, and risks of its implementation. Questions about this approach would include the following:
- What are the major risks incurred by the government in utilizing the EASE acquisition strategy?
- What level of risks (known unknowns and unknown unknowns) is being assumed in the estimates of savings to be accrued from the EASE strategy?
- How are evolutionary upgrades to capabilities to be pursued under EASE?
- How does the EASE acquisition strategy reconcile with the current federal and DOD acquisition policy, acquisition and financial management regulations, and law?

The Air Force is developing a new acquisition strategy for its EELV program. Primarily, under the new strategy, the Air Force and National Reconnaissance Office are expected to initiate block buys of eight first-stage booster cores—four for each EELV family, Atlas V and Delta IV—per year over 5 years to help stabilize the industrial base, maintain mission assurance, and avoid cost increases. As mentioned earlier, we have initiated a review of the development of the new strategy and plan to issue a report on our findings in June 2011. Given concerns raised through recent studies about visibility into costs and the industrial base supporting EELV, it is important that this strategy be supported with reliable and accurate data.

The actions that the Office of the Secretary of Defense and the Air Force have been taking to address acquisition problems listed in tables 2 and 3 are good steps. However, more changes to processes, policies, and support may be needed—along with sustained leadership and attention—to help ensure that these reforms can take hold, including addressing the diffuse leadership for space programs. 
Diffuse leadership has had a direct impact on the space system acquisition process, primarily because it has made it difficult to hold any one person or organization accountable for balancing needs against wants, for resolving conflicts among the many organizations involved with space, and for ensuring that resources are dedicated where they need to be dedicated. This has hampered DOD’s ability to synchronize delivery of space, ground, and user assets for space programs. For instance, many of the cost and schedule problems we identified on the GPS program were tied in part to diffuse leadership and organizational stovepipes throughout DOD, particularly with respect to DOD’s ability to coordinate delivery of space, ground, and user assets. Additionally, we have recently reported that DOD faces a situation where satellites with advances in capability will be residing for years in space without users being able to take full advantage of them because investments and planning for ground, user, and space components were not well coordinated. Specifically, we found that the primary cause for user terminals not being well synchronized with their associated space systems is that user terminal development programs are typically managed by different military acquisition organizations than those managing the satellites and ground control systems. Recent studies and reviews examining the leadership, organization, and management of national security space have found that there is no single authority responsible below the President and that authorities and responsibilities are spread across the department. In fact, the national security space enterprise comprises a wide range of government and nongovernment organizations responsible for providing and operating space-based capabilities serving both military and intelligence needs. 
While some changes to the leadership structure have recently been made—including revalidating the role of the Secretary of the Air Force as the DOD Executive Agent for Space, disestablishing the Office of the Assistant Secretary of Defense for Networks and Information Integration and the National Security Space Office, and aligning Air Force space system acquisition responsibility into a single Air Force acquisition office—and others are being studied, it is too early to tell how effective these changes will be in streamlining management and oversight of space system acquisitions. Additionally, while the recently issued National Space Policy assigns responsibilities for governmentwide space capabilities, such as those for SSA, it does not necessarily assign the corresponding authority to execute the responsibilities. Finally, adequate workforce capacity is essential for the front-end planning activities now required by acquisition reform initiatives for new weapon programs to be successful. However, studies have identified insufficient numbers of experienced space system acquisition personnel and inadequate continuity of personnel in project management positions as problems needing to be addressed in the space community. For example, a recent Secretary of the Air Force-directed Broad Area Review of space launch noted that while the Air Force Space and Missile Systems Center workforce had decreased by about 25 percent in the period from 1992 to 2010, the number of acquisition programs had increased by about 41 percent in the same time period. Additionally, our own studies have identified gaps in key technical positions, which we believed increased acquisition risks. For instance, in a 2008 review of the EELV program, we found that personnel shortages in the EELV program office occurred particularly in highly specialized areas. According to the EELV program office and Broad Area Review, this challenge persists. 
DOD is working to position itself to improve its space system acquisitions. After more than a decade of acquisition difficulties—which have created potential gaps in capability, diminished DOD’s ability to invest in new space systems, and lessened DOD’s credibility to deliver high-performing systems within budget and on time—DOD is starting to launch new generations of satellites that promise vast enhancements in capability. Within a single year, DOD has launched or expects to launch newer generations of navigation, communications, SSA, and missile warning satellites. Moreover, given the nation’s fiscal challenges, DOD’s focus on fixing problems and implementing reforms rather than taking on new, complex, and potentially higher-risk efforts is promising. However, challenges to keeping space system acquisitions on track remain, including pursuing evolutionary acquisitions over revolutionary ones, managing requirements, providing effective coordination across the diverse organizations interested in space-based capabilities, and ensuring that technical and programmatic expertise are in place to support acquisitions. DOD’s newest major space system acquisition efforts, such as GPS IIIA, DWSS, JMS, Space Fence, and the follow-on to the SBSS, will be key tests of how well DOD’s reforms and reorganizations have positioned it to manage these challenges. We look forward to working with DOD to help ensure that these and other challenges are addressed.

Chairman Nelson, Ranking Member Sessions, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

For further information about this statement, please contact Cristina Chaplain at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals who made key contributions to this statement include Art Gallegos, Assistant Director; Kristine Hassinger; Arturo Holguín; Rich Horiuchi; Roxanna Sun; and Bob Swierczek.

- Prioritize investments so that projects can be fully funded and it is clear where projects stand in relation to the overall portfolio.
- Follow an evolutionary path toward meeting mission needs rather than attempting to satisfy all needs in a single step.
- Match requirements to resources—that is, time, money, technology, and people—before undertaking a new development effort.
- Research and define requirements before programs are started and limit changes after they are started.
- Ensure that cost estimates are complete, accurate, and updated regularly.
- Commit to fully fund projects before they begin.
- Ensure that critical technologies are proven to work as intended before programs are started.
- Assign more ambitious technology development efforts to research departments until they are ready to be added to future generations (increments) of a product.
- Use systems engineering to close gaps between resources and requirements before launching the development process.
- Use quantifiable data and demonstrable knowledge to make go/no-go decisions, covering critical facets of the program such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers.
- Do not allow development to proceed until certain thresholds are met—for example, a high proportion of engineering drawings completed or production processes under statistical control.
- Empower program managers to make decisions on the direction of the program and to resolve problems and implement solutions.
- Hold program managers accountable for their choices.
- Require program managers to stay with a project to its end. 
- Hold suppliers accountable to deliver high-quality parts for their products through such activities as regular supplier audits and performance evaluations of quality and delivery, among other things.
- Encourage program managers to share bad news, and encourage collaboration and communication.

In preparing this testimony, we relied on our body of work in space programs, including previously issued GAO reports on assessments of individual space programs, common problems affecting space system acquisitions, and the Department of Defense’s (DOD) acquisition policies. We relied on our best practices studies, which comment on the persistent problems affecting space system acquisitions, the actions DOD has been taking to address these problems, and what remains to be done, as well as Office of the Secretary of Defense and Air Force documents addressing these problems and actions. We also relied on work performed in support of our annual weapons system assessments, and analyzed DOD funding estimates to assess cost increases and investment trends for selected major space system acquisition programs. The GAO work used in preparing this statement was conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
Despite decades of significant investment, most of the Department of Defense’s (DOD) large space acquisition programs have collectively experienced billions of dollars in cost increases, stretched schedules, and increased technical risks. Significant schedule delays of as much as 9 years have resulted in potential capability gaps in missile warning, military communications, and weather monitoring. These problems persist, with other space acquisition programs still facing challenges in meeting their targets and aligning the delivery of assets with appropriate ground and user systems. To address cost increases, DOD reduced the number of satellites it would buy, reduced satellite capabilities, or terminated major space system acquisitions. Broad actions have also been taken to prevent their occurrence in new programs, including better management of the acquisition process and oversight of its contractors and resolution of technical and other obstacles to DOD’s ability to deliver capability. This testimony will focus on the (1) status of space system acquisitions, (2) results of GAO’s space-related reviews over the past year and the challenges they signify, (3) efforts DOD has taken to address causes of problems and increase credibility and success in its space system acquisitions as well as efforts currently underway, and (4) what remains to be done. Over the past two decades, DOD has had difficulties with nearly every space acquisition program, with years of cost and schedule growth, technical and design problems, and oversight and management weaknesses. However, to its credit, DOD continues to make progress on several of its programs—such as the Space Based Infrared System High and Advanced Extremely High Frequency programs—and is expecting to deliver significant advances in capability as a result. But other programs continue to be susceptible to cost and schedule challenges. 
For example, the Global Positioning System (GPS) IIIA program's total cost has increased by about 10 percent over its original estimate, and delays in the Mobile User Objective System prolong the risk of a capability gap in ultra high frequency satellite communications. In 2010, GAO assessed DOD's efforts to (1) upgrade and sustain GPS capabilities and (2) commercialize space technologies developed by small businesses or incorporate them into its space acquisition programs. These reviews underscore the varied challenges that still face the DOD space community as it seeks to complete problematic legacy efforts and deliver modernized capabilities--for instance, the need for more focused coordination and leadership for space activities--and highlight the substantial barriers and challenges that small businesses must overcome to gain entry into the government space arena. DOD continues to work to ensure that its space programs are more executable and produce a better return on investment. Many of the actions it has been taking address root causes of problems, though it will take time to determine whether these actions are successful. For example, DOD is working to ensure that critical technologies are matured before large-scale acquisition programs begin and requirements are defined early in the process and are stable throughout. Additionally, DOD and the Air Force are working to streamline management and oversight of the national security space enterprise. While DOD actions to date have been good, more changes to processes, policies, and support may be needed--along with sustained leadership and attention--to help ensure that these reforms can take hold, including addressing the diffuse leadership for space programs. While some changes to the leadership structure have recently been made and others are being studied, it is too early to tell how effective they will be in streamlining management and oversight of space system acquisitions.
Finally, while space system acquisition workforce capacity is essential if new weapon programs are to be successful, DOD continues to face gaps in technical and programmatic expertise for space. |
EPA has taken some actions but has not fully addressed the findings and recommendations of five independent evaluations over the past 20 years regarding long-standing planning, coordination, and leadership issues that hamper the quality, effectiveness, and efficiency of its science activities, including its laboratory operations. First, EPA has yet to fully address planning and coordination issues identified by a 1992 independent, expert panel evaluation that recommended that EPA develop and implement an overarching issue-based planning process that integrates and coordinates scientific efforts throughout the agency, including the important work of its 37 laboratories. That evaluation found that EPA's science was of uneven quality and that the agency lacked a coherent science agenda and operational plan to guide scientific efforts throughout the agency. Because EPA did not implement the evaluation's recommendation, EPA's programs, regional officials, and ORD continue to independently plan and coordinate the activities of their respective laboratories based on their own offices' priorities and needs. Second, EPA has not fully addressed recommendations from a 1994 independent evaluation by the MITRE Corporation (Center for Environment, Resources, and Space, Assessment of the Scientific and Technical Laboratories and Facilities of the U.S. Environmental Protection Agency, McLean, Va., May 1994) to consolidate and realign its laboratory facilities. In response to this study, an agencywide steering committee formed by EPA to consider restructuring and consolidation options issued a report to the Administrator in July 1994. The steering committee report stated that combining ORD laboratories at a single location could improve teamwork and raise productivity but concluded that, for the near term, ORD should be functionally reorganized but not physically consolidated.
Regarding program office laboratory consolidations, the Office of Radiation and Indoor Air did not physically consolidate its laboratories but did administratively and physically consolidate its Las Vegas laboratory with ORD's Las Vegas radiation laboratory, and the Office of Prevention, Pesticides, and Toxic Substances colocated three of four laboratories with the region 3 laboratory. As for the regional laboratories, the steering committee's report endorsed the current decentralized regional model but did not provide a justification for its position. Third, EPA has not fully addressed recommendations regarding leadership of its research and laboratory operations made in subsequent independent evaluations (National Research Council, Interim Report of the Committee on Research and Peer Review in EPA, Washington, D.C., National Academies Press, 1995; Environmental Protection Agency, Office of Inspector General, Regional Laboratories, Washington, D.C., Aug. 20, 1997; and National Research Council, Strengthening Science at the U.S. Environmental Protection Agency: Research-Management and Peer Review Practices, Washington, D.C., National Academies Press, 2000), including recommendations to establish senior leadership for the agency's scientific and technical activities. To date, EPA has not requested authority to create a new position of deputy administrator for science and technology and continues to operate its laboratories under the direction of 15 different senior officials using 15 different organizational and management structures. As a result, EPA has a limited ability to know if scientific activities are being unintentionally duplicated among the laboratories or if opportunities exist to collaborate and share scientific expertise, equipment, and facilities across EPA's organizational boundaries. On the basis of our analysis of EPA's facility master planning process, we found that EPA manages its laboratory facilities on a site-by-site basis and does not evaluate each site in the context of all the agency's real property holdings—as recommended by the National Research Council report in 2004.
EPA’s facility master plans are intended to be the basis for justifying its building and facilities spending, which was $29.9 million in fiscal year 2010, and allocating those funds to specific repair and improvement projects. Master plans should contain, among other things, information on mission capabilities, use of space, and condition of individual laboratory sites. In addition, we found that most facility master plans were out of date. EPA’s real property asset management plan states that facility master plans are supposed to be updated every 5 years to reflect changes in facility condition and mission, but we found that 11 of 20 master plans were out of date and 2 of 20 had not been created yet. Because EPA makes capital improvement decisions on a site-by-site basis using master plans that are often outdated, it cannot be assured it is allocating its funds most appropriately. According to officials responsible for allocating capital improvement resources, they try to spread these funds across the agency’s offices and regions equitably but capital improvement funds have not kept pace with requests. The pressure and need to effectively share and allocate limited resources among EPA’s many laboratories were also noted in a 1994 National Academy of Public Administration report on EPA’s laboratory infrastructure, which found that EPA has “too many labs in too many locations often without sufficient resources to sustain a coherent stable program.” In addition, because decisions regarding laboratory facilities are made independently of one another, opportunities to improve operating efficiencies can be lost. Specifically, we found cases where laboratories that were previously colocated moved into separate space without considering the potential benefits of remaining colocated. 
In one case, we found that the relocation increased some operating costs because the laboratories then had two facility managers and two security contracts and associated personnel because of different requirements for the leased facility. In another case, when two laboratories that were previously colocated moved into separate new leased laboratories several miles apart, agency officials said that they did not know to what extent this move may have resulted in increased operating cost. EPA also does not have sufficiently complete and reliable data to make informed decisions for managing its facilities. Since 2003, when GAO first designated federal real property management as an area of high risk, agencies have come under increasing pressure to manage their real property assets more effectively. In February 2004, the President issued an executive order directing agencies to, among other things, improve the operational and financial management of their real property inventory. The order established a Federal Real Property Council within the Office of Management and Budget (OMB), which has developed guiding principles for real property asset management. In response to a June 2010 presidential memorandum directing agencies to accelerate efforts to identify and eliminate excess properties, in July 2010 EPA reported to the OMB that it does not anticipate the disposal of any of its owned laboratories and major assets in the near future because these assets are fully used and considered critical for EPA's mission. Decisions regarding facility disposal are made using the Federal Real Property Council's guidance, but we found that EPA does not have the information needed to effectively implement this guidance. Specifically, EPA does not have accurate, reliable information regarding (1) the need for facilities, (2) property usage, (3) facility condition, and (4) facility operating efficiency—thereby undermining the credibility of any decisions based on this approach.
First, EPA does not maintain accurate data to determine if there is an agency need for laboratory facilities because many facility master plans are often out of date. According to EPA’s asset management plan, the master plans are tools that communicate the link between mission priorities and facilities. However, without up-to-date master plans, EPA does not have accurate data to determine if laboratory facilities are needed for its mission. Second, the agency does not have accurate data on space needs and usage because many facility master plans containing space utilization analyses are out of date. EPA also does not use public and commercial space usage benchmarks—as recommended by the Federal Real Property Council—to calculate usage rates for its laboratories. Instead, EPA measures laboratory usage on the basis of interviews with local laboratory officials. According to EPA officials, they do not use benchmarks because the work of the laboratories varies. In 2008, however, an EPA contractor created a laboratory benchmark based on those used by comparable facilities at the Centers for Disease Control and Prevention, the National Institutes of Health, the Department of Energy, and several research universities to evaluate space at two ORD laboratories in North Carolina. Consequently, we believe that objective benchmarks can be developed for EPA’s unique laboratory requirements. In addition, the contractor’s analysis concluded that EPA could save $1.68 million in annual leasing and $800,000 in annual energy costs through consolidation of the two ORD laboratories. Agency officials told us they hope to consolidate the laboratories in fiscal year 2012 if funds are available. Third, the agency does not have accurate data for assessing facilities’ condition because condition assessments contained in facility master plans are often outdated. The data may also be unreliable because data entered by local facility managers are not verified, according to agency officials. 
Such verification could involve edit checks or controls to help ensure the data are entered accurately. Fourth, EPA does not have reliable operating cost data for its laboratory enterprise, because the agency’s financial management system does not track operating costs in sufficient detail to break out information for individual laboratories or for the laboratory enterprise as a whole. Reliable operating cost data are important in determining whether a laboratory facility is operating efficiently, a determination that should inform both capital investment and property disposal decisions. EPA does not use a comprehensive planning process for managing its laboratories’ workforce. For example, we found that not all of the regional and program offices with laboratories prepared workforce plans as part of an agencywide planning effort in 2007, and for those that did, most did not specifically address their laboratories’ workforce. In fact, some regional management and human resource officials we spoke with were unaware of the requirement to submit workforce plans to the Office of Human Resources. Some of these managers told us the program and regional workforce plans were a paperwork exercise, irrelevant to the way the workforce is actually managed. Managers in program and regional offices said that workforce planning for their respective laboratories is fundamentally driven by the annual budgets of program and regional offices and ceilings for full-time equivalents (FTE). In addition, none of the program and regional workforce plans we reviewed described any effort to work across organizational boundaries to integrate or coordinate their workforce with the workforces of other EPA laboratories. For example, although two regional workforce plans discussed potential vulnerability if highly skilled laboratory personnel retired, neither plan explored options for sharing resources across regional boundaries to address potential skill gaps. 
According to EPA’s Regional Laboratory System 2009 Annual Report, many of the regional laboratories provide the same or similar core analytical capabilities— including a full range of routine and specialized chemical and biological testing of air, water, soil, sediment, tissue, and hazardous waste. Nonetheless, in these workforce plans, each region independently determines and attempts to address its individual workforce needs. As a result, by not exploring options for sharing resources among the ORD, program, and regional boundaries to address potential skill gaps, EPA may be missing opportunities to fill critical occupation needs through resource sharing. Moreover, EPA does not have basic demographic information on the number of federal and contract employees currently working in its 37 laboratories. Specifically, EPA does not routinely compile the information needed to know how many scientific and technical employees it has working in its laboratories, where they are located, what functions they perform, or what specialized skills they may have. In addition, the agency does not have a workload analysis for the laboratories to help determine the optimal numbers and distribution of staff throughout the enterprise. We believe that such information is essential for EPA to prepare a comprehensive laboratory workforce plan to achieve the agency’s mission with limited resources. Because EPA’s laboratory workforce is managed separately by 15 independent senior officials, information about that workforce is tracked separately and is not readily available or routinely compiled or evaluated. Instead, EPA has relied on ad hoc calls for information to compile such data. 
In response to our prior reports on EPA’s workforce strategy and the work of the EPA Inspector General, EPA hired a contractor in 2009, in part to conduct a study to provide information about the agency’s overall workload, including staffing levels and workload shifts for six major functions, including scientific research. In its budget justification for fiscal year 2012, however, the agency reported to Congress that a survey of the existing workload information provided by the contractor will not immediately provide information sufficient to determine whether changes are needed in workforce levels. As of October 2011, EPA had not released the results of this study, and we therefore cannot comment on whether its content has implications for the laboratories. The agency asked its National Advisory Council for Environmental Policy and Technology to help address scientific and technical competencies as it develops a new agencywide workforce plan. However, the new plan is not complete, and therefore it is too early to tell whether the council’s recommendations will have implications for the laboratories. Finally, in our July 2011 report on EPA’s laboratory enterprise we recommended, among other things, that EPA develop a coordinated planning process for its scientific activities and appoint a top-level official with authority over all the laboratories, improve physical and real property planning decisions, and develop a workforce planning process for all laboratories that reflects current and future needs of laboratory facilities. In written comments on the report, EPA generally agreed with our findings and recommendations. Chairman Harris, Ranking Member Miller, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the subcommittee may have at this time. For further information on this statement, please contact David Trimble at (202) 512-3841 or [email protected]. 
Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Other staff that made key contributions to this testimony include Diane LoFaro, Assistant Director; Jamie Meuwissen; Angela Miles; and Dan Semick. | This testimony discusses the research and development activities of the Environmental Protection Agency (EPA) and the findings of our recent report on the agency's laboratory enterprise. EPA was established in 1970 to consolidate a variety of federal research, monitoring, standard-setting, and enforcement activities into one agency for ensuring the joint protection of environmental quality and human health. Scientific research, knowledge, and technical information are fundamental to EPA's mission and inform its standard-setting, regulatory, compliance, and enforcement functions. The agency's scientific performance is particularly important as complex environmental issues emerge and evolve, and controversy continues to surround many of the agency's areas of responsibility. Unlike other primarily science-focused federal agencies, such as the National Institutes of Health or the National Science Foundation, EPA's scientific research, technical support, and analytical services underpin the policies and regulations the agency implements. Therefore, the agency operates its own laboratory enterprise. This enterprise is made up of 37 laboratories that are housed in about 170 buildings and facilities located in 30 cities across the nation.
Specifically, EPA's Office of Research and Development (ORD) operates 18 laboratories with primary responsibility for research and development. Four of EPA's five national program offices operate nine laboratories with primary responsibility for supporting regulatory implementation, compliance, enforcement, and emergency response. Each of EPA's 10 regional offices operates a laboratory with responsibilities for a variety of applied sciences; analytical services; technical support to federal, state, and local laboratories; monitoring; compliance and enforcement; and emergency response. Over the past 20 years, independent evaluations by the National Research Council and others have addressed planning, coordination, or leadership issues associated with EPA's science activities. The scope of these evaluations varied, but collectively they recognized the need for EPA to improve long-term planning, priority setting, and coordination of laboratory activities; establish leadership for agencywide scientific oversight and decision making; and better manage the laboratories' workforce and infrastructure. When it was established in 1970, EPA inherited 42 laboratories from programs in various federal departments. According to EPA's historian, EPA closed or consolidated some laboratories it inherited and created additional laboratories to support its mission. Nevertheless, EPA's historian reported that the location of most of EPA's present laboratories is largely the same as the location of its original laboratories in part because of political objections to closing facilities and conflicting organizational philosophies, such as operating centralized laboratories for efficiency versus operating decentralized laboratories for flexibility and responsiveness. Other federal agencies face similar challenges with excess and underused property. Because of these challenges, GAO has designated federal real property as an area of high risk. 
This statement summarizes the findings of our report issued in July of this year that examines the extent to which EPA (1) has addressed the findings of independent evaluations performed by the National Research Council and others regarding long-term planning, coordination, and leadership issues; (2) uses an agencywide, coordinated approach for managing its laboratory physical infrastructure; and (3) uses a comprehensive planning process to manage its laboratory workforce. EPA has taken some actions but has not fully addressed the findings and recommendations of five independent evaluations over the past 20 years regarding long-standing planning, coordination, and leadership issues that hamper the quality, effectiveness, and efficiency of its science activities, including its laboratory operations. First, EPA has yet to fully address planning and coordination issues identified by a 1992 independent, expert panel evaluation that recommended that EPA develop and implement an overarching issue-based planning process that integrates and coordinates scientific efforts throughout the agency, including the important work of its 37 laboratories. Second, EPA has also not fully addressed recommendations from a 1994 independent evaluation by the MITRE Corporation to consolidate and realign its laboratory facilities and workforce--even though this evaluation found that the geographic separation of laboratories hampered their efficiency and technical operations and that consolidation and realignment could improve planning and coordination issues that have hampered its science and technical community for decades. Third, EPA has not fully addressed recommendations from the independent evaluations regarding leadership of its research and laboratory operations. 
On the basis of our analysis of EPA's facility master planning process, we found that EPA manages its laboratory facilities on a site-by-site basis and does not evaluate each site in the context of all the agency's real property holdings--as recommended by the National Research Council report in 2004. EPA's facility master plans are intended to be the basis for justifying its building and facilities spending, which was $29.9 million in fiscal year 2010, and allocating those funds to specific repair and improvement projects. Master plans should contain, among other things, information on mission capabilities, use of space, and condition of individual laboratory sites. In addition, we found that most facility master plans were out of date. EPA's real property asset management plan states that facility master plans are supposed to be updated every 5 years to reflect changes in facility condition and mission, but we found that 11 of 20 master plans were out of date and 2 of 20 had not been created yet. EPA does not use a comprehensive planning process for managing its laboratories' workforce. For example, we found that not all of the regional and program offices with laboratories prepared workforce plans as part of an agencywide planning effort in 2007, and for those that did, most did not specifically address their laboratories' workforce. In fact, some regional management and human resource officials we spoke with were unaware of the requirement to submit workforce plans to the Office of Human Resources. Some of these managers told us the program and regional workforce plans were a paperwork exercise, irrelevant to the way the workforce is actually managed. Managers in program and regional offices said that workforce planning for their respective laboratories is fundamentally driven by the annual budgets of program and regional offices and ceilings for full-time equivalents (FTE). |
Global exports of defense equipment have decreased significantly since the end of the Cold War in the late 1980s. Major arms producing countries, such as the United States and those in Western Europe, have reduced their procurement of defense equipment by about one-quarter from 1986 levels based on constant dollars. Overall, European nations have decreased their defense research and development spending over the last 3 years to a level that is about one-third of the relatively stable U.S. research and development funding. Defense exports have declined over 70 percent between 1987 and 1994. In response to decreased demand in the U.S. defense market, U.S. defense companies have consolidated, merged with other companies, or sold off their less profitable divisions, and they are seeking sales in international markets to make up lost revenue. These companies often compete with European defense companies for sales in Europe and in other parts of the world. The U.S. government, led by DOD, has maintained bilateral trade agreements with 21 of its allies, including most European countries, to address barriers to defense trade and international cooperation. No multilateral agreement exists on defense trade issues. Bilateral agreements have been established to provide a framework for discussions about opening defense markets with those countries as a way of improving the interoperability and standardization of equipment among North Atlantic Treaty Organization (NATO) allies. The United States has enjoyed a favorable balance of defense trade, which is still an issue of contention with some of the major arms producing countries in Europe. This trade imbalance was cited in a 1990 U.S. government study as a justification for European governments requiring defense offsets. However, because European investment in defense research and development is significantly below U.S.
levels, a Department of Commerce official stated that European industry is at a competitive disadvantage in meeting future military performance requirements. Reciprocal trade agreements recognize the need to develop and maintain an advanced technological capability for NATO and enhance equipment cooperation among the individual European member nations. A senior NATO official stated that Europe’s ability to develop an independent security capability within NATO and meet its fair share of alliance obligations is contingent on its ability to consolidate its defense industrial base. This official indicated that if such a consolidation does not occur, then European governments may be less willing to meet their NATO obligations. European governments have made slow gradual progress in developing and implementing unified armament initiatives. These initiatives are slow to evolve because the individual European nations often have conflicting goals and views on implementing procedures and a reluctance to yield national sovereignty. In addition, the various European defense organizations do not include all of the same member countries, making it difficult to establish a pan-European armament policy. European officials see the formation of a more unified European defense market as crucial to the survival of their defense industries as well as their ability to maintain an independent foreign and security policy. Individual national markets are seen as too small to support an efficient industry, particularly in light of declining defense budgets. At the same time, mergers and consolidations of U.S. defense companies are generating concern about the long-term competitiveness of a smaller, fragmented European defense industry. In the past, European governments made several attempts to integrate the European defense market using a variety of organizations. 
The Western European Union (WEU), the European Union, and NATO are among the institutions composed of different member nations that have addressed European armament policy issues (see fig. 1). For example, in 1976, the defense ministers of the European NATO nations established the Independent European Program Group as a forum for armament cooperation. This group operated without a legal charter, and its decisions were not binding among the member nations. In 1992, the European defense ministers decided that the group’s functions should be transferred to WEU, and the Western European Armaments Group was later created as the forum within WEU for armament cooperation. In 1991, WEU called for an examination of opportunities to enhance armament cooperation with the goal of creating a European armaments agency. WEU declared that it would develop as the defense component of the European Union and would formulate a common European defense policy. It also agreed to strengthen the European pillar within NATO. Under WEU, the Western European Armaments Group studied development of an armaments agency that would undertake procurement on behalf of member nations, but agreement could not be reached on the procurement procedures such an agency would follow. Appendix I is a chronology of key events associated with the development of an integrated European defense market. In 1996, two new armament agencies were formed. OCCAR was created as a joint management organization for France, Germany, Italy, and the United Kingdom, and the Western European Armaments Organization (WEAO) was created as a subsidiary body of WEU. As shown in table 1, the two agencies are separate entities with different functions. OCCAR was created as a result of French and German dissatisfaction with the lack of progress WEU was making in establishing a European armaments agency. 
Joined by Italy and the United Kingdom, the four nations agreed on November 12, 1996, to form OCCAR as a management organization for joint programs involving two or more member nations. OCCAR’s goals are to create greater efficiency in program management and facilitate emergence of a more unified market. Although press accounts raised concerns that OCCAR member countries would give preference to European products, no such preference was included in OCCAR’s procurement principles. Instead, it was agreed that an OCCAR member would give preference to procuring equipment that it helped to develop. In establishing OCCAR, the Defense ministers of the member countries agreed that OCCAR was to have a competitive procurement policy. Competition is to be open to all 13 member countries of the Western European Armaments Group. Other countries, including the United States, will be invited to compete when OCCAR program participants unanimously agree to open competitions to these countries based on reciprocity. OCCAR officials have indicated that procedures for implementing the competition policy, including criteria for evaluating reciprocity, have not yet been defined. According to some U.S. government and industry officials, issues to consider will include whether U.S. companies will be excluded from OCCAR procurement or whether OCCAR procurement policy will be consistent with the reciprocal trade agreements between member countries and the United States. OCCAR’s impact on the European defense market will largely depend on the number of programs that it manages. OCCAR members are discussing integrating additional programs in the future but are expected to only administer joint programs involving participating nations, thereby excluding transatlantic, NATO, or European cooperative programs involving non-OCCAR nations. Some European nations, such as France and Germany, are committed to undertaking new programs on a cooperative basis. 
While intra-European cooperation is not new, French Ministry of Defense officials have indicated that this represents a change for France because they no longer intend to develop a wide range of weapon programs on their own. On November 19, 1996, a week after OCCAR was created, the WEU Ministerial Council established WEAO to improve coordination of collaborative defense research projects by creating a single contracting entity. As a WEU subsidiary body, WEAO has legal authority to administer contracts, unlike OCCAR, which operates without a legal charter and has no authority to sign contracts for the programs it is to administer. WEAO’s initial task is to manage the Western European Armaments Group’s research and technology activities, while OCCAR is to manage the development and procurement of weapon systems. The WEAO executive body has responsibility for soliciting and evaluating bids and awarding contracts for common research activities. This single contracting entity eliminated the need to administer contracts through the different national contracting authorities. According to WEAO documentation, the organization was intentionally designed to allow it to evolve into a European armaments agency. However, it may take several years before the effect of OCCAR and WEAO procurement policies can be fully assessed. Some European government officials also told us that OCCAR’s ability to centrally administer contracts is curtailed until OCCAR obtains legal authority. U.S. government and industry officials are watching to see whether OCCAR and other initiatives are fostering political pressure and tendencies toward pan-European exclusivity. As membership of the various European organizations expands, pressure to buy European defense equipment may increase. 
For example, according to some industry officials, the new European members of NATO are already being encouraged by some Western European governments to buy European defense products to ease their entry into other European organizations. While European government initiatives appear to be making slow, gradual progress, the European defense industry is attempting to consolidate and restructure through national and cross-border mergers, acquisitions, joint ventures, and consortia. European government and industry observers have noted that European defense industry is reacting to pressures from rapid U.S. defense industry consolidation, tighter defense budgets, and stronger competition in the global defense market. Even with such pressures, other observers have noted that European defense companies are consolidating at a slower pace than U.S. defense companies. The combined defense expenditures of Western Europe are about 60 percent of the U.S. defense budget, but Western Europe has two to three times more suppliers, according to a 1997 Merrill Lynch study. For example, the United States will have two major suppliers in the military aircraft sector (once proposed mergers are approved), while six European nations each have at least one major supplier of military combat aircraft. In terms of defense revenues, U.S. defense companies tend to outpace European defense companies. Among the world’s top 10 arms producing companies in 1994, 8 were U.S. companies and 2 were European companies. While economic pressures to consolidate exist, European defense companies face several obstacles, according to European government and industry officials. For example, national governments, which greatly influence the defense industry and often regard their defense companies as sovereign assets, may not want a cross-border consolidation to occur because it could reduce the national defense industrial base or make it too specialized. 
National governments further impede defense industrial integration by establishing different defense equipment requirements. Complex ownership structures also make cross-border mergers difficult because many of the larger European defense companies are state-owned or part of larger conglomerates. To varying degrees, defense industry restructuring has occurred within the borders of major European defense-producing nations, including France, Germany, Italy, and the United Kingdom. In France, Thomson CSF and Aerospatiale formed a company, Sextant Avionique, that regrouped and merged their avionics and flight electronics activities. The French government initiated discussions in 1996 about the merger of the aviation companies Aerospatiale and Dassault, but negotiations are ongoing. In Germany, restructuring has primarily occurred in the aerospace sector. In 1995, Deutsche Aerospace became Daimler-Benz Aerospace, which includes about 80 percent of German industrial capabilities in aerospace. In Italy, by 1995 Finmeccanica had gained control of about three-quarters of the Italian defense industry, including Italy’s major helicopter manufacturer Agusta and aircraft manufacturer Alenia. In the United Kingdom, a number of mergers and acquisitions have occurred. For example, GKN purchased the helicopter manufacturer Westland and GEC purchased the military vehicle maker and shipbuilder VSEL in 1994. European companies have long partnered on cooperative armament programs for the development and production of large complex weapon systems in Europe. Often, a central management company has been created to manage the relationship between partners. For example, major aerospace companies from the United Kingdom, Germany, Italy, and Spain have created a consortium to work on the Eurofighter 2000 program. Another cooperative venture is the development of the European military transport aircraft known as the Future Large Aircraft.
Companies from a number of European nations are forming a joint venture company for the development and production of this aircraft. Project-based joint ventures are typically industry-led, but they are established with the consent of the governments involved. (See table 2 for examples of European defense company cooperative business activities for major weapon programs.) Although most cross-border industry cooperation is project specific, European defense companies are also acquiring companies or establishing joint ventures or cross-shareholdings that are not tied to a particular program. Some cross-border European consolidation has occurred in missiles, defense electronics, and space systems. For example, in 1996, Matra (France) and British Aerospace (United Kingdom) merged their missile activities to form Matra BAe Dynamics. Both companies retained a 50-percent share in the joint venture, but they have a single management structure and a plan to gradually integrate their missile manufacturing facilities. Figure 2 highlights some examples of consolidation in specific defense sectors. Despite attempts to develop a unified European armament policy, individual European governments still retain their own defense procurement policies. Key European countries, including France, Germany, Italy, the Netherlands, and the United Kingdom, vary in their willingness to purchase major U.S. defense equipment. These countries have been involved in efforts to form a unified European defense market, which some observers believe may lead to excluding U.S. defense companies from participating in that market. However, U.S. defense companies continue to sell significant defense equipment to certain European countries in certain product lines. Europe has a large, diverse defense industrial base on which key European nations rely for purchases of major defense equipment.
As in the United States, these European countries purchase the majority of their defense equipment from national sources. For example, the United Kingdom aims to competitively award about three-quarters of its defense contracts, with U.K. companies winning at least 90 percent of the contracts over the past several years. According to French Ministry of Defense officials, imports represented only 2 percent of France’s total defense procurements over the past 5 years. Germany and Italy each produced at least 80 percent of their national requirements for military equipment over the past several years. Despite its relatively small size, the Dutch defense industry supplied the majority of defense items to the Netherlands. Notwithstanding European preference for domestically developed weapons, U.S. defense companies have sold a significant amount of weapons to Western European countries either directly or through the U.S. government’s Foreign Military Sales program. These sales tended to be concentrated in certain countries and products. U.S. foreign military sales of defense equipment to Europe accounted for about $20 billion from 1992 to 1996. Europe was the second largest purchaser of U.S. defense items based on arms delivery data, following the Middle East. The leading European purchasers of U.S. defense equipment were Turkey, Finland, Greece, Switzerland, the Netherlands, and the United Kingdom. U.S. defense companies had greater success in selling aircraft and missiles to Western Europe than they did for tanks and ships. Of the almost $20 billion of U.S. foreign military sales, about $15 billion, or 75 percent, was for sales of military aircraft, aircraft spares, and aircraft modifications. About $3 billion, or 13 percent of total equipment sales, was for sales of missiles. Ships and military vehicles accounted for $552 million, or less than 3 percent of the total U.S. defense equipment sales. Figure 3 shows U.S. 
defense equipment sales to Western Europe by major weapon categories. According to U.S. defense company officials, sales of military aircraft to Europe are expected to be important in future competitions, particularly in the emerging defense markets in central Europe. Competition between major U.S. and European defense companies for aircraft sales in these markets is expected to be intense. U.S. defense companies varied in their success in winning the major European defense competitions that were open to foreign bidders. The Netherlands and the United Kingdom have bought major U.S. weapon systems over the last 5 years, even when European options were available. The United States is the largest supplier of defense imports to both the Netherlands and the United Kingdom. Both of these countries have stated open competition policies that seek the best defense equipment for the best value. In the major defense competitions in these countries in which U.S. companies won, U.S. industry and government officials stated that the factors that contributed to the success included the uniqueness and technical sophistication of the U.S. systems, industrial participation opportunities offered to local companies, and the absence of a domestically developed product in the competition. For example, in the sale of the U.S. Apache helicopter to the Netherlands and the United Kingdom, there was no competing domestically developed national option, the product was technically sophisticated, and significant industrial participation was offered to domestic defense companies. In the major defense competitions in which U.S. companies competed in the United Kingdom over the last 5 years, the U.K. government tended to choose a domestically developed product when one existed. In some cases, these products contained significant U.S. content. For example, in the competition for the U.K. Replacement Maritime Patrol Aircraft, the two U.S.
competing products lost to a British Aerospace developed product, the upgraded NIMROD aircraft. This British Aerospace product, however, contained significant U.S. content with major components coming from such companies as Boeing. In the Conventionally Armed Standoff Missile competition, Matra British Aerospace Dynamics’ Stormshadow (a U.K.-French developed option) won. In this case, the competing U.S. products were competitively priced, met the technical requirements, and would have provided significant opportunities for U.K. industrial participation. Table 3 provides details on some U.K. major procurements in which U.S. defense companies competed. France has purchased major U.S. defense weapon systems when no French or European option is available. In contrast to the Netherlands and the United Kingdom, the French defense procurement policy has been to first buy equipment from French sources, then to pursue European cooperative solutions, and lastly to import a non-European item. Recently, French armament policy has put primary emphasis on European cooperative programs, recognizing that it will not be economical to develop major systems alone in the future. The procurement policy reflects France’s goal to retain a defense industrial base and maintain autonomy in national security matters. As illustrated in table 4, the French government made two significant purchases from the United States in 1995 when it was not economical for French companies to produce comparable equipment or when it would have taken too long to develop. Germany and Italy have made limited purchases of U.S. defense equipment in recent years because of significantly reduced defense procurement budgets and commitments to European cooperative projects. Both countries now have an open competition defense procurement policy and buy a mixture of U.S. and European products. The largest share of these countries’ defense imports is supplied by the United States. 
In recent major defense equipment purchases from the United States, both Germany and Italy reduced quantities to reserve a portion of their procurement funding for European cooperative solutions. For example, Italy purchased the U.S. C-130J transport aircraft but continued to provide funding for a cooperative European transport aircraft program. As in the other European countries, Germany and Italy encourage U.S. companies to provide opportunities for local industrial participation when selling defense equipment. Table 5 highlights German defense procurement policy and a selected major procurement. As European nations work toward greater armament cooperation, competition for sales in Europe is likely to increase. To mitigate potential protectionism and negative effects on U.S.-European defense trade, both the U.S. defense industry and government have taken steps to improve transatlantic cooperation. U.S. defense companies are taking the lead in forming transatlantic ties to gain access to the European market. The U.S. government is also seeking opportunities to form transatlantic partnerships with its European allies on defense equipment development and production, but some observers point to practical and cultural impediments that affect the extent of such cooperation. U.S. defense companies are forming industrial partnerships with European companies to sell defense equipment to Europe because of the need to increase international sales, satisfy offset obligations, and maintain market access. Most of these partnerships are formed to bid on a particular weapon competition. Some, however, are emerging to sell products to worldwide markets. According to U.S. defense companies, partnering with European companies has become a necessary way of doing business in Europe. U.S. government and defense company officials have cited the importance of industrial partnerships with European companies in winning defense sales there. Many of these partnerships arose out of U.S. 
companies’ need to fulfill offset obligations on European defense sales by providing European companies with subcontract work. When U.S. companies had to find ways to satisfy the customary 100-percent offset obligation on defense contracts in Europe, they began to form industrial partnerships with European companies. With the declining U.S. defense budget after the end of the Cold War, many U.S. companies began to look for ways to increase their international defense sales in Europe and elsewhere. According to some U.S. company officials, they realized that many European government buyers did not want to buy commercially available defense equipment but wanted their own companies to participate in producing weapon systems to maintain their defense industrial base. Forming industrial partnerships was the only way that U.S. companies believed they could win sales in many European countries that were trying to preserve their own defense industries. In addition, several U.S. company officials have indicated that European governments have been pressuring each other in the last several years to purchase defense equipment from European companies before considering U.S. options. These officials stated that even countries that do not have large defense industries to support were being encouraged by other European countries to purchase European defense equipment for the economic good of the European Union. U.S. company officials believe that by forming industrial partnerships with European companies, they increase their ability to win defense contracts in Europe. U.S. defense companies form a variety of industrial partnerships with European companies, including subcontracting arrangements, joint ventures, international consortia, and teaming agreements. Examples of each are discussed in table 6. According to some U.S. defense company officials, most U.S.
industrial partnerships with European companies, whatever the form, are to produce or market a specific defense item. Some U.S. defense companies, however, are using the partnerships to create long-term alliances and interdependencies with European companies that extend beyond one sale. For example, Lockheed Martin has formed an industrial partnership with the Italian company Alenia to convert an Italian aircraft to satisfy an emerging market for small military transport aircraft. This arrangement arose out of a transaction involving the sale of C-130J transport aircraft to Italy. Some U.S. defense company officials see the establishment of long-term industrial partnerships as a way of improving transatlantic defense trade and countering efforts toward European protectionism. DOD has taken a number of steps over the last few years to improve defense trade and transatlantic cooperation. For example, it has revised its guidance on considering foreign suppliers in defense acquisitions and has removed some of the restrictions on buying defense equipment from overseas. In addition, senior DOD officials have shown renewed interest in international cooperative defense programs with U.S. allies in Europe and are actively seeking such opportunities. Despite some of these efforts, some observers have cautioned that a number of factors may hinder shifts in U.S.-European defense cooperative production programs on major weapons. The following U.S. policy changes have been made that may help to improve defense trade: A DOD directive issued in March 1996 sets out a hierarchy of acquiring defense equipment that places commercially available equipment from allies and cooperative development programs with allies, ahead of a new U.S. equipment development program. According to some U.S. government and defense industry officials, many military program managers traditionally would have favored a new domestic development program when deciding how to satisfy a military requirement. 
In April 1997, the Office of the Secretary of Defense announced that DOD would favorably consider requests for transfers of software documentation to allies. In the past, such requests were often denied, which was cited by U.S. government officials as a barrier to improving defense trade and cooperation with the United States. In April 1997, the Under Secretary of Defense (Acquisition and Technology) waived certain buy-national restrictions for countries with whom the United States had reciprocal trade agreements. This waiver allows DOD to procure from foreign suppliers certain defense equipment that was previously restricted to domestic sources. European government officials have cited U.S. buy-national restrictions as an obstacle to improving the reciprocal defense trade balance between the United States and Europe. DOD is also seeking ways to improve international cooperative programs with European countries through ongoing working groups and a special task force under the quadrennial review. Senior DOD officials have stated that the United States should take advantage of international armaments cooperation to leverage U.S. resources through cost-sharing and to improve standardization and interoperability of defense equipment with potential coalition partners. The U.S. government has participated in numerous international defense equipment cooperation activities with European countries, including research and development programs, data exchange agreements, and engineer and scientist exchanges, but these activities only occasionally resulted in cooperative production programs. More recently, senior DOD officials have paid increased attention to armaments cooperation with U.S. allies. In 1993, DOD established the Armaments Cooperation Steering Committee to improve cooperative programs.
In its ongoing efforts, the Steering Committee established several International Cooperative Opportunities Groups in 1995 to address specific issues in armaments cooperation. In addition, the 1997 Quadrennial Defense Review, which identified military modernization needs, included an international cooperation task force to determine which defense technology areas the United States could collaborate on with France, Germany, and the United Kingdom. In March 1997, the Secretary of Defense signed a memorandum stating that “it is DOD policy that we utilize international armaments cooperation to the maximum extent feasible.” The U.S. government has a few ongoing cooperative development programs for major weapon systems, but most cooperative programs are at the technology level. Some observers indicated to us that there may be some impediments to pursuing U.S.-European defense cooperative programs on major weapon systems because (1) European procurement budgets are limited compared to the U.S. budget; (2) the potential that U.S. support for a program may change with each annual budget review may cause concern among some European governments; (3) despite changes in DOD guidance, many military service program managers may be reluctant to engage in international cooperative programs due to the significant additional work that may be required and potential barriers that may arise, such as licensing and technology sharing restrictions; (4) many U.S. program managers may not consider purchasing from a foreign source due to the perceived technological superiority of U.S. weapons; and (5) European and U.S. governments have shown a desire to maintain an independent ability to provide for their national defense. Efforts have been made to develop a more unified European armament policy and defense industrial base. As regional unification efforts evolve, individual European nations still independently make procurement decisions, and these nations vary in their willingness to buy major U.S.
weapon systems when European options exist. To maintain market access in Europe, U.S. defense companies have established transatlantic industrial partnerships. These industrial partnerships appear to be evolving more readily than transatlantic cooperative programs led by governments. Although the U.S. government has recently taken steps to improve defense trade and cooperation, some observers have indicated that practical and cultural impediments can affect transatlantic cooperation on major weapon programs. In commenting on a draft of this report, DOD concurred with the report, and the Department of Commerce stated that it found the report to be accurate and had no specific comments or recommended changes. The comments from DOD and the Department of Commerce are reprinted in appendixes II and III, respectively. DOD also separately provided some technical suggestions, which we have incorporated in the text where appropriate. To identify European government defense integration plans and activities, we examined European Union, WEU, OCCAR, and NATO documents and publications. We developed a chronology of key events associated with the development of an integrated European defense market. We interviewed European Union, Western European Armaments Group, OCCAR, and NATO officials about European initiatives affecting trade and cooperation and their progress in meeting their goals. We also discussed these issues with officials at the U.S. mission to NATO, the U.S. mission to the European Union, and U.S. embassies in France, Germany, and the United Kingdom. We interviewed or obtained written responses from officials from six major defense companies in France, Germany, and the United Kingdom about European industry consolidation. We identified relevant information and studies about European government and industry initiatives and discussed these issues with consulting firms and European think tanks. To assess how procurement policies of European nations affect U.S.
defense companies’ market access, we focused our analysis on five countries. We selected France, Germany, and the United Kingdom because they have the largest defense budgets in Europe and their defense industries comprise 85 percent of European defense production. Italy and the Netherlands were selected because they are significant producers and buyers of defense equipment. These five countries are also current members or seeking membership in OCCAR. We interviewed officials from 13 U.S. defense companies on the basis of their roles as prime contractors and subcontractors and range of defense products sold in Europe. Most of these companies represented prime contractors. Eight of these were among the top 10 U.S. defense companies, based on fiscal year 1995 DOD prime contract awards. We also discussed the major defense competitions that U.S. companies participated in over the last 5 years and the factors that contributed to the competitions’ outcome with officials from these companies and with U.S. government officials. We discussed procurement policies with European and U.S. government officials. We met with Ministry of Defense officials in France, Germany, and the United Kingdom, as well as U.S. embassy officials in those countries. We did not conduct fieldwork in Italy or the Netherlands, but we did discuss these countries’ procurement policies with officials from their embassies in Washington, D.C. We also reviewed documents describing the procurement policies and procedures of the selected countries and U.S. government assessments and cables about major defense contract awards that occurred in these countries and discussed factors affecting these procurement awards with U.S. government and industry officials. We did not review documentation on the bids or contract awards. 
We collected and analyzed data on defense budgets and defense trade, including foreign military and direct commercial sales, to identify buying patterns in Western Europe over the past 5 years. We used the foreign military sales data only to analyze sales by weapons category for the five countries and Western Europe. Direct commercial sales data, which are tracked by the State Department through export licenses, were not organized by weapon categories for the last 5 years. However, we reviewed congressional notification records for direct commercial sales over $14 million for the last 5 years to supplement our analysis of foreign military sales data. To determine actions the U.S. industry and government have taken in response to changes in the European defense environment, we interviewed defense company and U.S. government officials within DOD and the Departments of Commerce and State. With U.S. defense companies, we discussed their business strategies and the nature of the partnerships formed with European defense companies. We obtained and analyzed recently issued DOD directives and policy memorandums on defense trade and international cooperation and discussed the effectiveness of these policies with U.S. and foreign government officials and U.S. and European defense companies. We performed our review from January to September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Secretaries of State and Commerce. We will also make copies available to others upon request. Please contact me at (202) 512-4181 if you have any questions concerning this report. Major contributors to this report were Karen Zuckerstein, Anne-Marie Lasowski, and John Neumann. The Western European Union (WEU) was established as a result of the agreements signed in Paris in October 1954 modifying the 1948 Brussels Treaty. The Treaty of Rome was signed, creating the European Community.
The Independent European Programme Group was established to promote European cooperation in research, development, and production of defense equipment; improve transatlantic armament cooperation; and maintain a healthy European defense industrial base. The Treaty on European Union was signed in Maastricht but was subject to ratification. The WEU member states also met in Maastricht and invited members of the European Union to accede to WEU or become observers, and other European members of the North Atlantic Treaty Organization (NATO) to become associate members of WEU. The Council of the WEU held its first formal meeting with NATO. The European Defense Ministers decided to transfer the Independent European Programme Group's functions to WEU. The Maastricht Treaty was ratified and the European Community became the European Union. French and German Ministers of Defense decided to simplify the management of joint armament research and development programs. The proposal for a Franco-German procurement agency emerged. A NATO summit was held, which supported the development of a European Security and Defense Identity and strengthening the European pillar of the Alliance. WEU Ministers issued the Noordwijk Declaration, endorsing a policy document containing preliminary conclusions on the formation of the Common European Defense policy. The European Union Intergovernmental Conference, or constitutional convention, convened. The Defense Ministers of France, Germany, Italy, and the United Kingdom signed the political foundation document for the joint armaments agency Organisme Conjoint de Cooperation en Matiere d'Armament (OCCAR). The Western European Armaments Organization was established, creating a subsidiary body within WEU to administer research and development contracts. The four National Armaments Directors of France, Germany, Italy, and the United Kingdom met during the first meeting of the Board of Supervisors of OCCAR.
The board reached decisions about OCCAR's organizational structure and programs to manage. The European Union Intergovernmental Conference concluded. A new treaty was drafted, but little advancement was made to developing a common foreign and security policy. The treaty called for the European Union to cooperate more closely with WEU, which might be integrated in the European Union if all member nations agree. The Board of Supervisors of OCCAR held a second meeting.
GAO's review focused on the buying practices of five European countries--France, Germany, Italy, the Netherlands, and the United Kingdom. GAO noted that: (1) pressure to develop a unified European armament procurement policy and related industrial base is increasing, as most nations can no longer afford to develop and procure defense items solely from their own domestic companies; (2) European governments have taken several initiatives to integrate the defense market, including the formation of two new organizations to improve armament cooperation; (3) European government officials remain committed to cooperative programs, which have long been the impetus for cross-border defense cooperation at the industry level; (4) some European defense companies are initiating cross-border mergers that are not tied to government cooperative programs; (5) although some progress toward regionalization is occurring, European government and industry officials told GAO that national sovereignty issues and complex ownership structures may inhibit European defense consolidation from occurring to the extent that is needed to be competitive; (6) until European governments agree on a unified armament policy, individual European countries will retain their own procurement policies; (7) like the United States, European countries tend to purchase major defense equipment from their domestic companies when such options exist; (8) when national options do not exist, key European countries vary in their willingness to buy major U.S. weapon systems; (9) trans-Atlantic industrial partnerships appear to be evolving more readily than trans-Atlantic cooperative programs that are led by governments; (10) U.S. defense companies have established these trans-Atlantic partnerships largely to maintain market access in Europe; (11) U.S. defense company officials say they cannot export major defense items to Europe without involving European defense companies in the production of those items; (12) some U.S. 
defense companies are seeking long-term partnerships with European companies to develop a defense product line that will meet requirements in Europe or other defense markets; (13) they believe such industrial interdependence can also help counter any efforts toward U.S. or European protectionism and may increase trans-Atlantic defense trade; and (14) the U.S. government has taken several steps over the last few years to improve defense trade and trans-Atlantic cooperation, but some observers point to practical and cultural impediments that affect U.S.-European cooperation on major weapon programs. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
For 16 years, DOD’s supply chain management processes have been on our list of high-risk areas needing urgent attention because of long-standing systemic weaknesses that we have identified in our reports. We initiated our high-risk program in 1990 to report on government operations that we identified as being at high risk for fraud, waste, abuse, and mismanagement. The program serves to identify and help resolve serious weaknesses in areas that involve substantial resources and provide critical services to the public. Removal of a high-risk designation may be considered when legislative and agency actions, including those in response to our recommendations, result in significant and sustainable progress toward resolving a high-risk problem. Key determinants include a demonstrated strong commitment to and top leadership support for addressing problems, the capacity to do so, a corrective action plan that provides for substantially completing corrective measures in the near term, a program to monitor and independently validate the effectiveness of corrective measures, and demonstrated progress in implementing corrective measures. Beginning in 2005, DOD developed a plan for improving supply chain management that could reduce its vulnerability to fraud, waste, abuse, and mismanagement and place it on the path toward removal from our list of high-risk areas. This supply chain management improvement plan, initially released in July 2005, contains 10 initiatives proposed as solutions to address the root causes of problems we identified from our prior work in the areas of requirements forecasting, asset visibility, and materiel distribution. DOD defines requirements as the need or demand for personnel, equipment, facilities, other resources, or services in specified quantities for specific periods of time or at a specified time. Accurately forecasted supply requirements are a key first step in buying, storing, positioning, and shipping items that the warfighter needs. 
DOD describes asset visibility as the ability to provide timely and accurate information on the location, quantity, condition, movement, and status of supplies and the ability to act on that information. Distribution is the process for synchronizing all elements of the logistics system to deliver the “right things” to the “right place” at the “right time” to support the warfighter. DOD’s success in improving supply chain management is closely linked with its overall defense business transformation efforts and completion of a comprehensive, integrated logistics strategy. In previous reports and testimonies, we have stated that progress in DOD’s overall approach to business transformation is needed to confront problems in other high-risk areas, including supply chain management. DOD has taken several steps intended to advance business transformation, including establishing new governance structures and aligning new information systems with its business enterprise architecture. Another key step to supplement these ongoing transformation efforts is completion of a comprehensive, integrated logistics strategy that would identify problems and capability gaps to be addressed, establish departmentwide investment priorities, and guide decision making. DOD’s success in improving supply chain management is closely linked with overall defense business transformation. Our prior reviews and recommendations have addressed business management problems that adversely affect the economy, efficiency, and effectiveness of DOD’s operations, and that have resulted in a lack of adequate accountability across several of DOD’s major business areas. We have concluded that progress in DOD’s overall approach to business transformation is needed to confront other high-risk areas, including supply chain management. 
DOD’s overall approach to business transformation was added to the high-risk list in 2005 because of our concern over DOD’s lack of adequate management accountability and the absence of a strategic and integrated action plan for the overall business transformation effort. Specifically, the high-risk designation for business transformation resulted because (1) DOD’s business improvement initiatives and control over resources are fragmented; (2) DOD lacks a clear strategic and integrated business transformation plan and investment strategy, including a well-defined enterprise architecture to guide and constrain implementation of such a plan; and (3) DOD has not designated a senior management official responsible and accountable for overall business transformation reform and related resources. In response, DOD has taken several actions intended to advance transformation. For example, DOD has established governance structures such as the Business Transformation Agency and the Defense Business Systems Management Committee. The Business Transformation Agency was established in October 2005 with the mission of transforming business operations to achieve improved warfighter support and improved financial accountability. The agency supports the Defense Business Systems Management Committee, which is comprised of senior-level DOD officials and is intended to serve as the primary transformation leadership and oversight mechanism. Furthermore, in September 2006, DOD released an updated Enterprise Transition Plan that is intended to be both a business transformation roadmap and management tool for modernizing its business process and underlying information technology assets. DOD describes the Enterprise Transition Plan as an executable roadmap aligned to DOD’s business enterprise architecture. 
In addition, as required by the National Defense Authorization Act for Fiscal Year 2006, DOD is studying the feasibility and advisability of establishing a Deputy Secretary for Defense Management to serve as DOD’s Chief Management Officer and advise the Secretary of Defense on matters relating to management, including defense business activities. Business systems modernization is a critical part of DOD’s transformation efforts, and successful resolution of supply chain management problems will require investment in needed information technology. DOD spends billions of dollars to sustain key business operations intended to support the warfighter, including systems and processes related to support infrastructure, finances, weapon systems acquisition, the management of contracts, and the supply chain. We have indicated at various times that modernized business systems are essential to the department’s effort in addressing its supply chain management issues. In its supply chain management improvement plan, DOD recognizes that achieving success in supply chain management is dependent on developing interoperable systems that can share critical supply data. One of the initiatives included in the plan is business system modernization, an effort that is being led by DOD’s Business Transformation Agency and includes achieving materiel visibility through systems modernization as one of its six enterprisewide priorities. Improvements in financial management are also integrally linked to DOD’s business transformation. Since our first report on the financial statement audit of a major DOD component over 16 years ago, we have repeatedly reported that weaknesses in business management systems, processes, and internal controls not only adversely affect the reliability of reported financial data, but also the management of DOD operations. 
Such weaknesses have adversely affected the ability of DOD to control costs, ensure basic accountability, anticipate future costs and claims on the budget, measure performance, maintain funds control, and prevent fraud. In December 2005, DOD issued its Financial Improvement and Audit Readiness Plan to guide its financial management improvement efforts. The Financial Improvement and Audit Readiness Plan is intended to provide DOD components with a roadmap for (1) resolving problems affecting the accuracy, reliability, and timeliness of financial information; and (2) obtaining clean financial statement audit opinions. It uses an incremental approach to structure its process for examining operations, diagnosing problems, planning corrective actions, and preparing for audit. The plan also recognizes that it will take several years before DOD is able to implement the systems, processes, and other changes necessary to fully address its financial management weaknesses. Furthermore, DOD has developed an initial Standard Financial Information Structure, which is DOD’s enterprisewide data standard for categorizing financial information. This effort focused on standardizing general ledger and external financial reporting requirements. While these steps are positive, defense business transformation is much broader and encompasses planning, management, organizational structures, and processes related to all key business areas. As we have previously observed, business transformation requires long-term cultural change, business process reengineering, and a commitment from both the executive and legislative branches of government. Although sound strategic planning is the foundation on which to build, DOD needs clear, capable, sustained, and professional leadership to maintain continuity necessary for success. 
Such leadership would provide the attention essential for addressing key stewardship responsibilities—such as strategic planning, performance management, business information management, and financial management—in an integrated manner, while helping to facilitate the overall business transformation effort within DOD. As DOD continues to evolve its transformation efforts, critical to successful reform are sustained leadership, organizational structures, and a clear strategic and integrated plan that encompasses all major business areas, including supply chain management. Another key step to supplement ongoing defense business transformation efforts is completion of a comprehensive, integrated logistics strategy that would identify problems and capability gaps to be addressed, establish departmentwide investment priorities, and guide decision making. Over the years, we have recommended that DOD adopt such a strategy, and DOD has undertaken various efforts to identify, and plan for, future logistics needs. However, DOD currently lacks an overarching logistics strategy. In December 2005, DOD issued its “As Is” Focused Logistics Roadmap, which assembled various logistics programs and initiatives associated with the fiscal year 2006 President’s Budget and linked them to seven key joint future logistics capability areas. The roadmap identified more than $60 billion of planned investments in these programs and initiatives, yet it also indicated that key focused logistics capabilities would not be achieved by 2015. Therefore, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the department to prepare a rigorous “To Be” roadmap that would present credible options to achieve focused logistics capabilities. 
According to officials with the Office of the Secretary of Defense, the “To Be” logistics roadmap will portray where the department is headed in the logistics area and how it will get there, and will allow the department to monitor progress toward achieving its objectives, as well as institutionalize a continuous assessment process that links ongoing capability development, program reviews, and budgeting. It would identify the scope of logistics problems and capability gaps to be addressed and include specific performance goals, programs, milestones, resources, and metrics to guide improvements in supply chain management and other areas of DOD logistics. Officials anticipate that the initiatives in the supply chain management improvement plan will be incorporated into the “To Be” logistics roadmap. DOD has not established a target date for completing the “To Be” roadmap. According to DOD officials, its completion is pending the results of the department’s ongoing test of new concepts for managing logistics capabilities. The Deputy Secretary of Defense initiated this joint capability portfolio management test in September 2006 to explore new approaches for managing certain capabilities across the department, facilitating strategic choices, and improving the department’s ability to make capability trade-offs. The intent of joint capability portfolio management is to improve interoperability, minimize redundancies and gaps, and maximize effectiveness. Joint logistics is one of the four capability areas selected as test cases for experimentation. The joint logistics test case portfolio will include all capabilities required to project and sustain joint force operations, including supply chain operations. According to DOD officials, initial results of the joint logistics capability portfolio management test are expected to be available in late spring 2007, and the results of the test will then be used to complete the “To Be” logistics roadmap. 
The results of the test are also expected to provide additional focus on improving performance in requirements determination, asset visibility, and materiel distribution, officials said. We have also noted previously that while DOD and its component organizations have had multiple plans for improving aspects of logistics, the linkages among these plans have not been clearly shown. In addition to the supply chain management improvement plan, current DOD plans that address aspects of supply chain management include the Enterprise Transition Plan and component-level plans developed by the military services and the Defense Logistics Agency. Although we are encouraged by DOD’s planning efforts, the department lacks a comprehensive, integrated strategy to guide logistics programs and initiatives across the department. Without such a strategy, decision makers will lack the means to effectively guide program efforts and the ability to determine if these efforts are achieving the desired results. Although DOD is making progress implementing supply chain management initiatives, it is unable to demonstrate at this time the full extent to which it is improving supply chain management. DOD has established some high-level performance measures but they do not explicitly address the focus areas, and an improvement in those measures cannot be directly attributed to the initiatives. Further, the metrics in DOD’s supply chain management improvement plan generally do not measure performance outcomes and costs. In addition to implementing audit recommendations, as discussed in the next section of this report, DOD is making progress improving supply chain management by implementing initiatives in its supply chain management improvement plan. For example, DOD has met key milestones in its Joint Regional Inventory Materiel Management, Radio Frequency Identification, and Item Unique Identification initiatives. 
Through its Joint Regional Inventory Materiel Management initiative, DOD began to streamline the storage and distribution of defense inventory items on a regional basis, in order to eliminate duplicate materiel handling and inventory layers. Last year, DOD completed a pilot for this initiative in the San Diego region and, in January 2006, began a similar transition for inventory items in Oahu, Hawaii, which was considered operational in August 2006. In May 2006, DOD published an interim Defense Federal Acquisition Regulation clause governing the application of tags to different classes of assets being shipped to distribution depots and aerial ports for the Radio Frequency Identification initiative. The Item Unique Identification initiative, which provides for marking of personal property items with a set of globally unique data items to help DOD value and track items throughout their life cycle, received approval by the International Organization for Standardization/International Electrotechnical Commission in September 2006 for an interoperable solution for automatic identification and data capture based on widely used international standards. DOD has sought to demonstrate significant improvement in supply chain management within 2 years of the plan’s inception in July 2005; however, the department may have difficulty meeting its July 2007 goal. Some of the initiatives are still being developed or piloted and have not yet reached the implementation stage, others are in the early stages of implementation, and some are not scheduled for completion until 2008 or later. For example, according to DOD’s plan, the Readiness Based Sparing initiative, an inventory requirements methodology that the department expects will enable higher levels of readiness at equivalent or reduced inventory costs using commercial off-the-shelf software, is not expected to begin implementation until January 2008. 
The Item Unique Identification initiative, which involves marking personal property items with a set of globally unique data elements to help DOD track items during their life cycles, will not be completed until December 2010 under the current schedule. While DOD has generally stayed on track, it has reported some slippage in meeting scheduled milestones for certain initiatives. For example, a slippage of 9 months occurred in the Commodity Management initiative because additional time was required to develop a departmentwide approach. This initiative addresses the process of developing a systematic procurement approach to the department’s needs for a group of items. Additionally, according to DOD’s plan, the Defense Transportation Coordination initiative experienced a slippage in holding the presolicitation conference because defining requirements took longer than anticipated. Given the long-standing nature of the problems being addressed, the complexities of the initiatives, and the involvement of multiple organizations within DOD, we would expect to see further milestone slippage in the future. The supply chain management improvement plan generally lacks outcome-focused performance metrics that track progress in the three focus areas and at the initiative level. Performance metrics are critical for demonstrating progress toward achieving results and for providing information on which to base organizational and management decisions, and they are important management tools for all levels of an agency, including the program or project level. Moreover, outcome-focused performance metrics show results or outcomes related to an initiative or program in terms of its effectiveness, efficiency, impact, or all of these. 
To track progress toward goals, effective performance metrics should have a clearly apparent or commonly accepted relationship to the intended performance or be reasonable predictors of desired outcomes; should not be unduly influenced by factors outside a program’s control; should measure multiple priorities, such as quality, timeliness, outcomes, and cost; should sufficiently cover key aspects of performance; and should adequately capture important distinctions between programs. Performance metrics enable the agency to assess accomplishments, strike a balance among competing interests, make decisions to improve program performance, realign processes, and assign accountability. While it may take years before the results of programs become apparent, intermediate metrics can be used to provide information on interim results and show progress toward intended results. In addition, when program results could be influenced by external factors, intermediate metrics can be used to identify the program’s discrete contribution to the specific result. DOD’s plan does include four high-level performance measures that are being tracked across the department, but these measures do not explicitly relate to the focus areas (nor are they required to). The four measures are as follows: Backorders—number of orders held in an unfilled status pending receipt of additional parts or equipment through procurement or repair. Customer wait time—number of days between the issuance of a customer order and satisfaction of that order. On-time orders—percentage of orders that are on time according to DOD’s established delivery standards. Logistics response time—number of days to fulfill an order placed on the wholesale level of supply from the date a requisition is generated until the materiel is received by the retail supply activity. 
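Two of these measures, customer wait time and on-time orders, reduce to simple date arithmetic once order-issuance and order-satisfaction dates are available. The sketch below illustrates one way such measures could be computed; the order records, dates, and the 15-day delivery standard are hypothetical illustrations, not actual DOD supply data or DOD's computation method.

```python
from datetime import date
from statistics import mean

# Hypothetical order records: (order issued, order satisfied, delivery
# standard in days). Illustrative values only -- not actual DOD data.
orders = [
    (date(2006, 1, 3), date(2006, 1, 15), 15),
    (date(2006, 1, 10), date(2006, 2, 2), 15),
    (date(2006, 2, 1), date(2006, 2, 10), 15),
]

# Customer wait time: days between issuance of a customer order and
# satisfaction of that order.
wait_times = [(received - issued).days for issued, received, _ in orders]
avg_wait = mean(wait_times)

# On-time orders: percentage of orders satisfied within the delivery standard.
on_time_pct = 100 * sum(
    (received - issued).days <= standard
    for issued, received, standard in orders
) / len(orders)

print(f"average customer wait time: {avg_wait:.1f} days")
print(f"on-time orders: {on_time_pct:.0f}%")
```

Backorders and logistics response time would be tallied the same way from requisition and order-status records; as the report observes, the hard problem is not the arithmetic but attributing movement in these aggregate measures to any single improvement initiative.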
Additionally, these measures may be affected by many variables; hence, improvements in the high-level performance measures cannot be directly attributed to the initiatives in the plan. For example, implementing RFID at a few sites at a time has only a very small impact on customer wait time. However, variables such as natural disasters, wartime surges in requirements, or disruption in the distribution process could affect that measure. DOD’s supply chain materiel management regulation requires that functional supply chain metrics support at least one enterprise-level metric. DOD’s plan also lacks outcome-focused performance metrics for 6 of the 10 specific improvement initiatives contained in the plan. For example, while DOD intended to have RFID implemented at 100 percent of its U.S. and overseas distribution centers by September 2007—a measure indicating when scheduled milestones are met—it had not yet identified outcome-focused performance metrics that could be used to show the impact of implementation on expected outcomes, such as receiving and shipping timeliness, asset visibility, or supply consumption data. Two other examples of improvement initiatives that lack outcome-focused performance metrics are War Reserve Materiel, which aims to more accurately forecast war reserve requirements by using capability-based planning and incorporating lessons learned in Operation Iraqi Freedom, and Joint Theater Logistics, which is an effort to improve the ability of a joint force commander to execute logistics authorities and processes within a theater of operations. One of the challenges in developing departmentwide supply chain performance measures, according to a DOD official, is obtaining standardized, reliable data from noninteroperable systems. For example, the Army currently does not have an integrated method to determine receipt processing for Supply Support Activities, which could affect asset visibility and distribution concerns. 
Some of the necessary data reside in the Global Transportation Network while other data reside in the Standard Army Retail Supply System. These two databases must be manually reviewed and merged in order to obtain the information for accurate receipt processing performance measures. Nevertheless, we believe that intermediate measures, such as outcome-focused measures for each of the initiatives or for the focus areas, could show near-term progress. According to a DOD official, in September 2006, DOD awarded a year-long supply chain benchmarking contract to assess commercial supply chain metrics. The official indicated that six outcome measures were chosen for the initial effort: on-time delivery, order fulfillment cycle time, perfect order fulfillment, supply chain management costs, inventory days of supply, and forecast accuracy. Furthermore, the specific supply chains to be reviewed will be recommended by the various DOD components and approved by an executive committee. According to the same DOD official, the contractor will be looking at the specific supply chains approved and the industry equivalent; and a set of performance scorecards mapping the target supply segment to average and best-in-class performance from the comparison population will be developed for each supply chain and provided to the component. This assessment is a good step but it is too early to determine the effectiveness of this effort in helping DOD to demonstrate progress toward improving its supply chain management. Further, we noted that DOD has not provided cost metrics that might show efficiencies gained through supply chain improvement efforts. In addition to improving the provision of supplies to the warfighter and improving readiness of equipment, DOD’s stated goal in its supply chain management improvement plan is to reduce or avoid costs. However, 9 of the 10 initiatives in the plan lack cost metrics. 
Without outcome-focused performance and cost metrics for each of the improvement initiatives that are linked to the focus areas, such as requirements forecasting, asset visibility, and materiel distribution, it is unclear whether DOD is progressing toward meeting its stated goal. Over the last 5 years, audit organizations have made more than 400 recommendations that focused specifically on improving certain aspects of DOD’s supply chain management. DOD or the component organization concurred with almost 90 percent of these recommendations, and most of the recommendations that were closed as of the time of our review were considered implemented. We determined that the three focus areas of requirements forecasting, asset visibility, and materiel distribution accounted for 41 percent of the total recommendations made, while other inventory management and supply chain issues accounted for the remaining recommendations. We also grouped the recommendations into five common themes—management oversight, performance tracking, policy, planning, and processes. Several studies conducted by non-audit organizations have made recommendations that address supply chain management as part of a broader review of DOD logistics. Appendixes I through V summarize the audit recommendations we included in our baseline. Appendix VI summarizes recommendations made by non-audit organizations. In developing a baseline of supply chain management recommendations, we identified 478 supply chain management recommendations made by audit organizations between October 2001 and September 2006. DOD or the component organization concurred with 411 (86 percent) of the recommendations; partially concurred with 44 recommendations (9 percent); and nonconcurred with 23 recommendations (5 percent). These recommendations cover a diverse range of objectives and issues concerning supply chain management. 
For example, one recommendation with which DOD concurred was contained in our 2006 report on production and installation of Marine Corps truck armor. To better coordinate decisions about what materiel solutions are developed and procured to address common urgent wartime requirements, we recommended—and DOD concurred—that DOD should clarify the point at which the Joint Urgent Operational Needs process should be utilized when materiel solutions require research and development. In another case, DOD partially concurred with a recommendation in our 2006 report on Radio Frequency Identification (RFID), which consists of electronic tags that are attached to equipment and supplies being shipped from one location to another, enabling shipment tracking. To better track and monitor the use of RFID tags, we recommended—and DOD partially concurred—that the secretaries of each military service and the administrators of other components should determine requirements for the number of tags needed, compile an accurate inventory of the number of tags currently owned, and establish procedures to monitor and track tags, including purchases, reuse, losses, and repairs. In its response to our report, DOD agreed to direct the military services and the U.S. Transportation Command to develop procedures to address the reuse of the tags as well as procedures for the return of tags no longer required. However, the department did not agree to establish procedures to account for the procurement, inventory, repair, or losses of existing tags in the system. On the other hand, an example of a recommendation that DOD did not concur with was contained in our 2005 report on supply distribution operations. To improve the overall efficiency and interoperability of distribution-related activities, we recommended—but DOD did not concur—that the Secretary of Defense should clarify the scope of responsibilities, accountability, and authority between U.S. 
Transportation Command’s role as DOD’s Distribution Process Owner and other DOD components. In its response to our report, DOD stated that the responsibilities, accountability, and authority of this role were already clear. The audit organizations had closed 315 (66 percent) of the 478 recommendations at the time we conducted our review. Of the closed recommendations, 275 (87 percent) were implemented and 40 (13 percent) were not implemented as reported by the audit agencies. For example, one closed recommendation that DOD implemented was in our 2005 report on oversight of prepositioning programs. To address the risks and management challenges facing the department’s prepositioning programs and to improve oversight, we recommended that the Secretary of Defense direct the Chairman, Joint Chiefs of Staff, to assess the near-term operational risks associated with current inventory shortfalls and equipment in poor condition should a conflict arise. In response to our recommendation, the Joint Staff conducted a mission analysis on several operational plans based on the readiness of prepositioned assets. On the other hand, an example of a closed recommendation that DOD did not implement was in our 2003 report on Navy spare parts shortages. To provide a basis for management to assess the extent to which ongoing and planned initiatives will contribute to the mitigation of critical spare parts shortages, we recommended that the Secretary of Defense direct the Secretary of the Navy to develop a framework that includes long-term goals; measurable, outcome-related objectives; implementation goals; and performance measures as a part of either the Navy Sea Enterprise strategy or the Naval Supply Systems Command Strategic Plan. DOD agreed with the intent of the recommendation, but not the prescribed action. 
The recommendation was closed but not implemented because the Navy did not plan to modify the Naval Supply Systems Command Strategic Plan or higher-level Sea Enterprise Strategy to include a specific focus on mitigating spare parts shortages. Audit recommendations addressing the three focus areas in DOD’s supply chain management improvement plan—requirements forecasting, asset visibility, and materiel distribution—accounted for 196 (41 percent) of the total recommendations. The fewest recommendations were made in the focus area of distribution, accounting for just 6 percent of the total. Other inventory management issues accounted for most of the other recommendations. In addition, a small number of recommendations, less than 1 percent of the total, addressed supply chain management issues that could not be grouped under any of these other categories. In further analyzing the recommendations, we found that they addressed five common themes—management oversight, performance tracking, policy, planning, and processes. Table 1 shows the number of audit recommendations made by focus area and theme. Most of the recommendations addressed processes (38 percent), management oversight (30 percent), or policy (22 percent), with comparatively fewer addressing planning (7 percent) and performance tracking (4 percent). The management oversight theme includes any recommendations involving compliance, conducting reviews, or providing information to others. For example, the Naval Audit Service recommended that the Office of the Commander, U.S. Fleet Forces Command should enforce existing requirements that ships prepare and submit Ship Hazardous Material List Feedback Reports and Allowance Change Requests, whenever required. The performance tracking theme includes recommendations with performance measures, goals, objectives, and milestones. 
For example, the Army Audit Agency recommended that funding for increasing inventory safety levels be withheld until the Army Materiel Command develops test procedures and identifies key performance indicators to measure and assess its cost-effectiveness and impact on operational readiness. The policy theme contains recommendations on issuing guidance, revising or establishing policy, and establishing guidelines. For example, the DOD-IG recommended that the Defense Logistics Agency revise its supply operating procedures to meet specific requirements. The planning theme contains recommendations related to plan, doctrine, or capability development or implementation, as well as any recommendations related to training. For example, the Army Audit Agency recommended the Defense Supply Center in Philadelphia implement a Quality Assurance Surveillance Plan that encompasses all requirements of the prime vendor contract. The largest theme, processes, consists of recommendations that processes and procedures should be established or documented and that recommendations should be implemented. For example, we recommended that the Secretary of Defense direct the service secretaries to establish a process to share information between the Marine Corps and Army on developed or developing materiel solutions. Studies conducted by non-audit organizations contain recommendations that address supply chain management as part of a broader review of DOD logistics. For example, the Center for Strategic and International Studies and the Defense Science Board suggested the creation of a departmentwide logistics command responsible for end-to-end supply chain operations. In July 2005, the Center for Strategic and International Studies issued a report, “Beyond Goldwater-Nichols: U.S. Government and Defense Reform for a New Strategic Era,” which addressed the entire U.S. national security structure, including the organization of logistics support.
In this report, the study team acknowledged that recent steps, such as strengthening joint theater logistics and the existence of stronger coordinating authorities, have significantly increased the unity of effort in logistical support to ongoing operations. However, according to the study, much of this reflects the combination of exemplary leadership and the intense operational pull of Operation Iraqi Freedom, and has not been formalized and institutionalized by charter, doctrine, or organizational realignment. It further noted that the fact that a single Distribution Process Owner was needed to overcome the fragmented structure of DOD’s logistical system underscores the need for fundamental reform. The study team recommended the integration of the management of transportation and supply warehousing functions under a single organization such as an integrated logistics command. The report noted that the Commission on Roles and Missions also had recommended the formation of a logistics command back in 1995. In 2005, the Summer Study Task Force on Transformation, under the direction of the Under Secretary of Defense for Acquisition, Technology, and Logistics, convened to assess DOD’s transformation progress, including the transformation of logistics capabilities. In this assessment, issued in February 2006, the Defense Science Board observed that each segment in the supply chain is optimized for its own specific function. For example, in the depot shipping segment of the supply chain, packages are consolidated into truck-size loads in order to fill the trucks for efficiency. Yet, optimizing each segment inevitably suboptimizes the major objective of end-to-end movement from source to user. The Defense Science Board report further indicated that although the assignment of the U.S.
Transportation Command as the Distribution Process Owner was an important step towards addressing an end-to-end supply chain, it did not go far enough to meet the objective of an effective supply chain. The necessary step is to assign a joint logistics command the authority and accountability for providing this essential support to global operations. Unlike recommendations made by audit agencies, DOD does not systematically track the status of recommendations made by non-audit organizations. Hence, in our analysis, we did not determine the extent to which DOD concurred with or implemented recommendations from these organizations. Overcoming systemic, long-standing problems requires comprehensive approaches. Improving DOD’s supply chain management will require continued progress in defense business transformation, including completion of a comprehensive, integrated strategy to guide the department’s logistics programs and initiatives. In addition, while DOD has made a commitment to improving supply chain management, as demonstrated by the development and implementation of the supply chain management improvement plan, the plan generally lacks outcome-focused performance metrics that would enable DOD to track and demonstrate the extent to which its individual efforts improve supply chain management or the extent of improvement in the three focus areas of requirements forecasting, asset visibility, and materiel distribution. Furthermore, without cost metrics, it will be difficult to show efficiencies gained through supply chain improvement initiatives.
To improve DOD’s ability to guide logistics programs and initiatives across the department and to demonstrate the effectiveness, efficiency, and impact of its efforts to resolve supply chain management problems, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following two actions: Complete the development of a comprehensive, integrated logistics strategy that is aligned with other defense business transformation efforts, including the Enterprise Transition Plan. To facilitate completion of the strategy, DOD should establish a specific target date for its completion. Further, DOD should take steps as appropriate to ensure the supply chain management improvement plan and component-level logistics plans are synchronized with the department’s overall logistics strategy. Develop, implement, and monitor outcome-focused performance and cost metrics for all the individual initiatives in the supply chain management improvement plan as well as for the plan’s focus areas of requirements forecasting, asset visibility, and materiel distribution. In its written comments on a draft of this report, DOD concurred with our recommendations. The department’s response is reprinted in appendix VII. In response to our recommendation to complete the development of a comprehensive, integrated logistics strategy, DOD stated that the strategy is under development and is aligned with other defense business transformation efforts. DOD estimated that the logistics strategy would be completed 6 months after it completes the logistics portfolio test case in the spring of 2007. DOD did not address whether it would take steps to ensure the supply chain management improvement plan and component-level logistics plans are synchronized with the department’s overall logistics strategy.
We continue to believe that these plans must be synchronized with the overall logistics strategy to effectively guide program efforts across the department and to provide the means to determine if these efforts are achieving the desired results. In response to our recommendation to develop, implement, and monitor outcome-focused performance and cost metrics, the department indicated it has developed and implemented outcome-focused performance and cost metrics for logistics across the department. However, DOD acknowledged that more work needs to be accomplished in linking the outcome metrics to the initiatives in the supply chain management improvement plan as well as for the focus areas of requirements forecasting, asset visibility, and materiel distribution. DOD stated that these linkages will be completed as part of full implementation of each initiative. We are pleased that the department recognized the need for linking outcome-focused metrics with the individual initiatives and the three focus areas in its supply chain management improvement plan. However, it is unclear from DOD’s response how and under what timeframes the department plans to implement this goal. As we noted in the report, DOD lacks outcome-focused performance metrics for supply chain management, in part because one of the challenges is obtaining standardized, reliable data from noninteroperable systems. In addition, initiatives in the supply chain management plan are many years away from full implementation. If DOD waits until full implementation to incorporate outcome-based metrics, it will miss opportunities to assess progress on an interim basis. We also continue to believe that cost metrics are critical for DOD to assess progress toward meeting its stated goal of improving the provision of supplies to the warfighter and improving readiness of equipment while reducing or avoiding costs through its supply chain initiatives.
Our discussion of the integration of supply chain management with broader defense transformation efforts is based primarily on our prior reports and testimonies. We obtained information on DOD’s “To Be” logistics roadmap and the joint logistics capabilities portfolio management test from senior officials in the Office of the Deputy Under Secretary of Defense for Logistics, Materiel, and Readiness. We met regularly with DOD and OMB officials to discuss the overall status of the supply chain management improvement plan, the implementation schedules of the plan’s individual initiatives, and the plan’s performance measures. We visited and interviewed officials from U.S. Transportation Command, the Defense Logistics Agency, the military services, and the Joint Staff to gain their perspectives on improving supply chain management. To develop a baseline of recommended supply chain management improvements, we surveyed audit reports covering the time period of October 2001 to September 2006. We selected this time period because it corresponds with recent military operations that began with the onset of Operation Enduring Freedom and, later, Operation Iraqi Freedom. We surveyed audit reports issued by our office, the DOD-IG, the Army Audit Agency, the Naval Audit Service, and the Air Force Audit Agency. For each audit recommendation contained in these reports, we determined its status and focus. To determine the status of GAO recommendations, we obtained data from our recommendation tracking system. We noted whether DOD concurred with, partially concurred with, or did not concur with each recommendation. In evaluating agency comments on our reports, we have noted instances where DOD agreed with the intent of a recommendation but did not commit to taking any specific actions to address it. For the purposes of this report, we counted these as concurred recommendations. We also noted whether the recommendation was open, closed and implemented, or closed and not implemented. 
In a similar manner, we worked with DOD-IG and the service audit agencies to determine the status of their recommendations. We verified with each of the audit organizations that they agreed with our definition that a recommendation is considered “concurred with” when the audit organization determines that DOD or the component organization fully agreed with the recommendation in its entirety and its prescribed actions, and “partially concurred with” when the audit organization determines that DOD or the component organization agreed to parts of the recommendation or parts of its prescribed actions. Furthermore, we verified that a recommendation is officially “closed” when the audit organization determines that DOD or the component organization has implemented its provisions or otherwise met the intent of the recommendation; when circumstances have changed, and the recommendation is no longer valid; or when, after a certain amount of time, the audit organization determines that implementation cannot reasonably be expected. We also verified that an “open” recommendation is one that has not been closed for one of the preceding reasons. We assessed the reliability of the data we obtained from DOD-IG and the service audit agencies by obtaining information on how they track and follow up on recommendations and determined that their data were sufficiently reliable for our purposes. In analyzing the focus of recommendations, we identified those addressing three specific areas—requirements forecasting, asset visibility, and materiel distribution—as well as those addressing other supply chain management concerns. We selected these three focus areas as the framework for our analysis based on our prior work in this high-risk area and because DOD has structured its supply chain management improvement plan around them.
We then analyzed the recommendations and further divided them into one of five common themes: management oversight, performance tracking, planning, processes, and policy. To identify the focus area and theme for each report and recommendation, three analysts independently labeled each report with a focus area and identified a theme for each recommendation within the report. The team of analysts then reviewed the results, discussed any discrepancies, and reached agreement on the appropriate theme for each recommendation. In the event of a discrepancy that could not be immediately resolved, we referred to the original report to clarify its intent in order to decide on the appropriate focus area and theme. For the purpose of our analysis, if a recommendation consisted of multiple actions, we counted and classified each action separately. We excluded from our analysis recommendations that addressed only a specific piece of equipment or system. We also excluded recommendations that addressed other DOD high-risk areas, such as business systems modernization and financial management. While we included recommendations by non-audit organizations in our analysis, we did not determine the extent to which DOD concurred with or implemented them because their status is not systematically tracked. We conducted our review from January through November 2006 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Director, Office of Management and Budget; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; and other interested parties. This report will also be available at no charge on our Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-8365 or [email protected]. Key contributors to this report are listed in appendix VIII.
To ensure that the services make informed and coordinated decisions about what materiel solutions are developed and procured to address common urgent wartime requirements, GAO recommended that the Secretary of Defense take the following two actions: (1) Direct the service secretaries to establish a process to share information between the Marine Corps and the Army on developed or developing materiel solutions, and (2) Clarify the point at which the Joint Urgent Operational Needs process should be utilized when materiel solutions require research and development. GAO recommended that the Secretary of Defense direct the Under Secretary of Defense, Acquisition, Technology and Logistics to ensure that the Director of the Defense Logistics Agency provide continual management oversight of the corrective actions to address pricing problems in the prime vendor program. GAO recommended that the Secretary of Defense take the following seven actions: To ensure DOD inventory management centers properly assign codes to categorize the reasons to retain items in contingency retention inventory, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (1) Direct the Secretary of the Army to instruct the Army Materiel Command to modify the Commodity Command Standard System so it will properly categorize the reasons for holding items in contingency retention inventory. (2) Direct the Secretary of the Air Force to instruct the Air Force Materiel Command to correct the Application Programs, Indenture system’s deficiency to ensure it properly categorizes the reasons for holding items in contingency retention inventory. 
To ensure that the DOD inventory management centers retain contingency retention inventory that will meet current and future operational requirements, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (3) Direct the Secretary of the Army to instruct the Army Materiel Command to require the Aviation and Missile Command to identify items that no longer support operational needs and determine whether the items need to be removed from the inventory. The Army Materiel Command should also determine whether its other two inventory commands, the Communications-Electronics Command and Tank-automotive and Armaments Command, are also holding obsolete items, and if so, direct those commands to determine whether the disposal of those items is warranted. To ensure that DOD inventory management centers conduct annual reviews of contingency retention inventory as required by DOD’s Supply Chain Materiel Management Regulation, direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to: (4) Direct the Director of the Defense Logistics Agency to require the Defense Supply Center Richmond to conduct annual reviews of contingency retention inventory. The Defense Logistics Agency should also determine whether its other two centers, the Defense Supply Center Columbus and the Defense Supply Center Philadelphia, are conducting annual reviews, and if not, direct them to conduct the reviews so they can ensure the reasons for retaining the contingency retention inventory are valid. (5) Direct the Secretary of the Navy to instruct the Naval Inventory Control Point Mechanicsburg to conduct annual reviews of contingency retention inventory. The Naval Inventory Control Point should also determine if its other organization, Naval Inventory Control Point Philadelphia, is conducting annual reviews and if not, direct the activity to conduct the reviews so it can ensure the reasons for retaining the contingency retention inventory are valid. 
(6) Direct the Secretary of the Army to instruct the Army Materiel Command to require the Aviation and Missile Command to conduct annual reviews of contingency retention inventory. The Army Materiel Command should also determine if its other two inventory commands, the Communications-Electronics Command and Tank-automotive and Armaments Command, are conducting annual reviews and, if not, direct the commands to conduct the reviews so they can ensure the reasons for retaining the contingency retention inventory are valid. To ensure that DOD inventory management centers implement departmentwide policies and procedures for conducting annual reviews of contingency retention inventories, direct the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness to take the following action: (7) Revise the DOD’s Supply Chain Materiel Management Regulation to make clear who is responsible for providing recurring oversight to ensure the inventory management centers conduct the annual reviews of contingency retention inventory. To ensure funding needs for urgent wartime requirements are identified quickly, requests for funding are well documented, and funding decisions are based on risk and an assessment of the highest priority requirements, GAO recommended the Secretary of Defense direct the Secretary of the Army to establish a process to document and communicate all urgent wartime funding requirements for supplies and equipment at the time they are identified and the disposition of funding decisions. GAO recommended that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to take the following two actions: (1) Modify the July 30, 2004, RFID policy and other operational guidance to require that active RFID tags be returned for reuse or be reused by the military services and other users.
(2) Direct the secretaries of each military service and administrators of other components to establish procedures to track and monitor the use of active RFID tags, to include determining requirements for the number of tags needed, compiling an accurate inventory of the number of tags currently owned, and establishing procedures to monitor and track tags, including purchases, reuse, losses, repairs, and any other categories that would assist management’s oversight of these tags. To improve accountability of inventory shipped to Army repair contractors, GAO recommended that the Secretary of Defense direct the Secretary of the Army to instruct the Commanding General, Army Materiel Command, to take the following six actions: (1) Establish systematic procedures to obtain and document contractors’ receipt of secondary repair item shipments in the Army’s inventory management systems, and to follow up on unconfirmed receipts within 45 days of shipment. (2) Institute policies, consistent with DOD regulations, for obtaining and documenting contractors’ receipt of government-furnished materiel shipments in the Army’s inventory management systems. (3) Provide quarterly status reports of all shipments of Army government-furnished materiel to the Defense Contract Management Agency, in compliance with DOD regulations. (4) Examine the feasibility of implementing DOD guidance for providing advance notification to contractors at the time of shipment and, if warranted, establish appropriate policies and procedures for implementation. (5) Analyze receipt records for secondary repair items shipped to contractors and take actions necessary to update and adjust inventory management data prior to transfer to the Logistics Modernization Program. These actions should include investigating and resolving shipments that lack matching receipts to determine their status.
(6) To ensure consistent implementation of any new procedures arising from the recommendations in this report, provide periodic training to appropriate inventory control point personnel and provide clarifying guidance concerning these new procedures to the command’s repair contractors. To enhance DOD’s ability to take a more coordinated and systemic approach to improving the supply distribution system, GAO recommended that the Secretary of Defense take the following three actions: (1) Clarify the scope of responsibilities, accountability, and authority between the Distribution Process Owner and the Defense Logistics Executive as well as the roles and responsibilities between the Distribution Process Owner, the Defense Logistics Agency, and Joint Forces Command. (2) Issue a directive instituting these decisions and make other related changes, as appropriate, in policy and doctrine. (3) Improve the Logistics Transformation Strategy by directing the Under Secretary of Defense (Acquisition, Technology, and Logistics) to include specific performance goals, programs, milestones, and resources to achieve focused logistics capabilities in the Focused Logistics Roadmap. To address the current underfunding of the Very Small Aperture Terminal and the Mobile Tracking System, GAO recommended that the Secretary of Defense direct the Secretary of the Army to determine whether sufficient funding priority has been given to the acquisition of these systems and, if not, to take appropriate corrective action. To address the risks and management challenges facing the department’s prepositioning programs and improve oversight, GAO recommended that the Secretary of Defense take the following five actions: (1) Direct the Chairman, Joint Chiefs of Staff, to assess the near-term operational risks associated with current inventory shortfalls and equipment in poor condition should a conflict arise.
(2) Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to provide oversight over the department’s prepositioning programs by fully implementing the department’s directive on war reserve materiel and, if necessary, revise the directive to clarify the lines of accountability for this oversight. (3) Direct the Secretary of the Army to improve the processes used to determine requirements and direct the Secretaries of the Army and Air Force to improve the processes used to determine the reliability of inventory data so that the readiness of their prepositioning programs can be reliably assessed and proper oversight over the programs can be accomplished. (4) Develop a coordinated departmentwide plan and joint doctrine for the department’s prepositioning programs that identifies the role of prepositioning in the transformed military and ensures these programs will operate jointly, support the needs of the war fighter, and are affordable. (5) Report to Congress, possibly as part of the mandated October 2005 report, how the department plans to manage the near-term operational risks created by inventory shortfalls and management and oversight issues described in this report. Defense Logistics: Better Strategic Planning Can Help Ensure DOD’s Successful Implementation of Passive Radio Frequency Identification (GAO-05-345, September 12, 2005) GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to expand its current RFID planning efforts to include a DOD-wide comprehensive strategic management approach that will ensure that RFID technology is efficiently and effectively implemented throughout the department.
This strategic management approach should incorporate the following key management principles: an integrated strategy with goals, objectives, and results for fully implementing RFID in the DOD supply chain process, to include the interoperability of automatic information systems; a description of specific actions needed to meet goals and performance measures or metrics to evaluate progress toward achieving the goals; schedules and milestones for meeting deadlines; identification of the total RFID resources needed to achieve full implementation; and an evaluation and corrective action plan. (2) Direct the secretaries of each military service and administrators of other DOD military components to develop individual comprehensive strategic management approaches that support the DOD-wide approach for fully implementing RFID into the supply chain processes. (3) Direct the Under Secretary of Defense (Acquisition, Technology, and Logistics), the secretaries of each military service, and administrators of other military components to develop a plan that identifies the specific challenges impeding passive RFID implementation and the actions needed to mitigate these challenges. Such a plan could be included in the strategic management approach that GAO recommended they develop.
To improve the effectiveness of DOD’s supply system in supporting deployed forces for contingencies, GAO recommended that the Secretary of Defense direct the Secretary of the Army to take the following three actions and specify when they will be completed: (1) Improve the accuracy of Army war reserve requirements and transparency about their adequacy by: updating the war reserve models with OIF consumption data that validate the type and number of items needed, modeling war reserve requirements at least annually to update the war reserve estimates based on changing operational and equipment requirements, and disclosing to Congress the impact on military operations of its risk management decision about the percentage of war reserves being funded. (DOD concurred with the intent of this recommendation, which remains open.) (2) Improve the accuracy of its wartime supply requirements forecasting process by: developing models that can compute operational supply requirements for deploying units more promptly as part of prewar planning and providing item managers with operational information in a timely manner so they can adjust modeled wartime requirements as necessary. (3) Reduce the time delay in granting increased obligation authority to the Army Materiel Command and its subordinate commands to support their forecasted wartime requirements by establishing an expeditious supply requirements validation process that provides accurate information to support timely and sufficient funding. (4) GAO also recommended that the Secretary of Defense direct the Secretary of the Navy to improve the accuracy of the Marine Corps’ wartime supply requirements forecasting process by completing the reconciliation of the Marine Corps’ forecasted requirements with actual OIF consumption data to validate the number as well as types of items needed and making necessary adjustments to their requirements. The department should also specify when these actions will be completed.
GAO recommended that the Secretary of Defense direct the Secretary of the Army and Director of the Defense Logistics Agency to take the following two actions: (5) Minimize future acquisition delays by assessing the industrial-base capacity to meet updated forecasted demands for critical items within the time frames required by operational plans as well as specify when this assessment will be completed, and (6) Provide visibility to Congress and other decision makers about how the department plans to acquire critical items to meet demands that emerge during contingencies. GAO also recommended the Secretary of Defense take the following three actions and specify when they would be completed: (7) Revise current joint logistics doctrine to clearly state, consistent with policy, who has responsibility and authority for synchronizing the distribution of supplies from the United States to deployed units during operations; (8) Develop and exercise, through a mix of computer simulations and field training, deployable supply receiving and distribution capabilities including trained personnel and related equipment for implementing improved supply management practices, such as radio frequency identification tags that provide in-transit visibility of supplies, to ensure they are sufficient and capable of meeting the requirements in operational plans; and (9) Establish common supply information systems that ensure the DOD and the services can requisition supplies promptly and match incoming supplies with unit requisitions to facilitate expeditious and accurate distribution. GAO continued to believe, as it did in April 1999, that DOD should develop a cohesive, departmentwide plan to ensure that total asset visibility is achieved. 
Specifically, GAO recommended that the Secretary of Defense develop a departmentwide long-term total asset visibility strategy as part of the Business Enterprise Architecture that: (1) Describes the complete management structure and assigns accountability to specific offices throughout the department, with milestones and performance measures, for ensuring timely success in achieving total asset visibility; (2) Identifies the resource requirements for implementing total asset visibility and includes related investment analyses that show how the major information technology investments will support total asset visibility goals; (3) Identifies how departmentwide systems issues that affect implementation of total asset visibility will be addressed; and (4) Establishes outcome-oriented total asset visibility goals and performance measures for all relevant components and closely links the measures with timelines for improvement. In addition, since 2001, GAO made a number of recommendations aimed at improving DOD’s refinement and implementation of the business management modernization program. Most recently, GAO identified the need to have component plans clearly linked to the long-term objectives of the department’s business management modernization program. As they relate to total asset visibility, GAO continued to believe that these recommendations were valid. To reduce the likelihood of releasing classified and controlled spare parts that DOD does not want to be released to foreign countries, GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army and the Navy, and direct the Secretary of the Air Force to develop an implementation plan, such as a Plan of Actions & Milestones, specifying the remedial actions to be taken to ensure that applicable testing and review of the existing requisition-processing systems are conducted on a periodic basis. 
(2) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, to determine whether current plans for developing the Case Execution Management Information System call for periodic testing and, if not, provide for such testing. (3) Direct the Under Secretary of Defense for Policy, in conjunction with the Secretary of the Navy, and direct the Secretary of the Air Force to determine if it would be beneficial to modify the Navy’s and the Air Force’s requisition-processing systems so that the systems reject requisitions for classified or controlled parts that foreign countries make under blanket orders and preclude country managers from manually overriding system decisions, and to modify their systems as appropriate. To improve the control of government-furnished material shipped to Navy repair contractors, GAO recommended that the Secretary of Defense direct the Secretary of the Navy to instruct the Commander, Naval Inventory Control Point, to implement the following three actions: (1) Require Navy repair contractors to acknowledge receipt of material that is received from the Navy’s supply system as prescribed by DOD procedure. (2) Follow up on unconfirmed material receipts within the 45 days as prescribed in the DOD internal control procedures to ensure that the Naval Inventory Control Point can reconcile material shipped to and received by its repair contractors. (3) Implement procedures to ensure that quarterly reports of all shipments of government-furnished material to Navy repair contractors are generated and distributed to the Defense Contract Management Agency. 
To address the inventory management shortcomings that GAO identified, GAO recommended that the Secretary of Defense take the following three actions: (1) Direct the military services and the Defense Logistics Agency to determine whether it would be beneficial to use the actual storage cost data provided by Defense Logistics Agency in their computations, instead of using estimated storage costs, and include that data in their systems and models as appropriate; (2) Direct the Secretary of the Air Force to establish and implement a systemwide process for correcting causes of inventory discrepancies between the inventory for which item managers are accountable and the inventory reported by bases and repair centers; and (3) Direct the Secretary of the Air Force to revise its policy to require item managers to code inventory so that the inventory is properly categorized. To improve internal controls over the Navy’s foreign military sales program and to prevent foreign countries from obtaining classified and controlled spare parts under blanket orders, GAO recommended that the Secretary of Defense instruct the Secretary of the Navy to take the following six actions: (1) Consult with the appropriate officials to resolve the conflict between the DOD and Navy policies on the Navy’s use of waivers allowing foreign countries to obtain classified spare parts under blanket orders. (2) Determine and implement the necessary changes required to prevent the current system from erroneously approving blanket order requisitions for classified spare parts until the new system is deployed. (3) Establish policies and procedures for the Navy’s country managers to follow when documenting their decisions to override the system when manually processing blanket order requisitions. (4) Require that the Navy’s country managers manually enter blanket order requisitions into the Navy’s system to correctly represent foreign-country-initiated orders versus U.S. 
government-initiated orders so the Navy’s system will validate whether the foreign countries are eligible to receive the requested spare parts. (Agency responses to the preceding recommendations, in order: concurred, closed, implemented; concurred, closed, implemented; concurred, closed, implemented; concurred, closed, implemented; concurred, closed, implemented; partially concurred, closed, implemented; concurred, closed, implemented; partially concurred, closed, implemented.) (5) Establish policies and procedures to follow for blanket orders when the Navy’s country managers replace spare parts requested by manufacturer or vendor part numbers with corresponding government national stock numbers. (6) Establish interim policies and procedures, after consulting with appropriate government officials, for recovering classified or controlled spare parts shipped to foreign countries that might not have been eligible to receive them under blanket orders until the Defense Security Cooperation Agency develops guidance on this issue. To improve the Navy system’s internal controls aimed at preventing foreign countries from obtaining classified and controlled spare parts under blanket orders, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Policy to require the appropriate officials to take the following two actions: (7) Modify the Navy’s system to revalidate blanket order requisitions when the Navy’s country manager replaces spare parts that are requested by manufacturer or vendor part numbers. (8) Periodically test the system to ensure that it is accurately reviewing blanket order requisitions before approving them.
To improve internal controls over the Army’s foreign military sales program and to prevent foreign countries from being able to obtain classified spare parts or unclassified items containing military technology that they are not eligible to receive under blanket orders, GAO recommended that the Secretary of Defense instruct the Secretary of the Army to take the following two actions: (1) Modify existing policies and procedures, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered to also ensure the recovery of classified spare parts that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. (2) Modify existing policies and procedures covering items, after consultation with the appropriate government officials, to cover items shipped in lieu of items ordered to also ensure the recovery of unclassified items containing military technology that have been shipped to foreign countries that may not be eligible to receive them under blanket orders. To improve the Army system’s internal controls aimed at preventing foreign countries from obtaining classified spare parts or unclassified items containing military technology under blanket orders, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Policy to require the appropriate officials to take the following two actions: (3) Modify the system so that it identifies blanket order requisitions for unclassified items containing military technology that should be reviewed before they are released. (4) Periodically test the system and its logic for restricting requisitions to ensure that the system is accurately reviewing and approving blanket order requisitions. 
In order to improve supply availability, enhance operations and mission readiness, and reduce operating costs for deployed ships, GAO recommended the Secretary of Defense direct the Secretary of the Navy to: (1) Develop plans to conduct periodic ship configuration audits and to ensure that configuration records are updated and maintained so that accurate inventory data can be developed for deployed ships; (2) Ensure that demand data for parts entered into ship supply systems are recorded promptly and accurately as required to ensure that onboard ship inventories reflect current usage or demands; (3) Periodically identify and purge spare parts from ship inventories to reduce costs when parts have not been requisitioned for long periods of time and are not needed according to current and accurate configuration and parts demand information; and (4) Ensure that casualty reports are issued consistent with high priority maintenance work orders, as required by Navy instruction, to provide a more complete assessment of a ship’s readiness. To improve the supply availability of critical readiness-degrading spare parts that may improve the overall readiness posture of the military services, GAO recommended that the Secretary of Defense direct the Director of the Defense Logistics Agency to: (1) Submit, as appropriate, requests for waiver(s) of the provisions of the DOD Supply Chain Materiel Management Regulation 4140.1-R that limit the safety level of supply parts to specific demand levels.
Such waivers would allow Defense Logistics Agency to buy sufficient critical spare parts that affect readiness of service weapon systems to attain an 85 percent minimum availability goal; (2) Change the agency’s current aggregate 85 percent supply availability goal for critical spare parts that affect readiness, to a minimum 85 percent supply availability goal for each critical spare part, and because of the long lead times in acquiring certain critical parts, establish annual performance targets for achieving the 85 percent minimum goal; and (3) Prioritize funding as necessary to achieve the annual performance targets and ultimately the 85 percent minimum supply availability goal. To improve internal controls over the Air Force’s foreign military sales program and to minimize countries’ abilities to obtain classified or controlled spare parts under blanket orders for which they are not eligible, GAO recommended that the Secretary of Defense instruct the Secretary of the Air Force to require the appropriate officials to take the following steps: (1) Modify the Security Assistance Management Information System so that it validates country requisitions based on the requisitioned item’s complete national stock number. (2) Establish policies and procedures for recovering classified or controlled items that are erroneously shipped. (3) Establish policies and procedures for validating modifications made to the Security Assistance Management Information System to ensure that the changes were properly made. (4) Periodically test the Security Assistance Management Information System to ensure that the system’s logic for restricting requisitions is working correctly. (5) Establish a policy for command country managers to document the basis for their decisions to override Security Assistance Management Information System or foreign military sales case manager recommendations.
GAO recommended that the Secretary of Defense direct the Secretary of the Navy to: (1) Develop a framework for mitigating critical spare parts shortages that includes long-term goals; measurable, outcome-related objectives; implementation goals; and performance measures as a part of either the Navy Sea Enterprise strategy or the Naval Supply Systems Command Strategic Plan, which will provide a basis for management to assess the extent to which ongoing and planned initiatives will contribute to the mitigation of critical spare parts shortages; and (2) Implement the Office of the Secretary of Defense’s recommendation to report, as part of budget requests, the impact of funding on individual weapon system readiness with a specific milestone for completion. In order to improve the department’s logistics strategic plan to achieve results for overcoming spare parts shortages, improve readiness, and address the long-standing weaknesses that are limiting the overall economy and efficiency of logistics operations, GAO recommended that the Secretary of Defense direct the Under Secretary for Acquisition, Technology, and Logistics to: (1) Incorporate clear goals, objectives, and performance measures pertaining to mitigating spare parts shortages in the Future Logistics Enterprise or appropriate agencywide initiatives to include efforts recommended by the Under Secretary of Defense, Comptroller in his August 2002 study report. GAO also recommended that the Secretary of Defense direct the Under Secretary of Defense, Comptroller to (2) Establish reporting milestones and define how it will measure progress in implementing the August 2002 Inventory Management Study recommendations related to mitigating critical spare parts shortages. 
GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to take the following steps: (1) Incorporate the Air Force Strategic Plan’s performance measures and targets into the subordinate Logistics Support Plan and the Supply Strategic Plan. (2) Commit to start the remaining initiatives needed to address the causes of spare parts shortages or clearly identify how they have been incorporated into initiatives already underway. (3) Adopt performance measures and targets for its initiatives that will show how their implementation will affect critical spare parts availability and readiness. (4) Direct the new Innovation and Transformation Directorate to establish plans and priorities for improving management of logistics initiatives consistent with the Air Force Strategic Plan. (5) Request spare parts funds in the Air Force’s budget consistent with results of its spare parts requirements determination process. GAO recommended that the Secretary of Defense direct the Secretary of the Army to: (1) Modify or supplement the Transformation Campaign Plan, or the Army-wide logistics initiatives, to include a focus on mitigating critical spare parts shortages with goals, objectives, milestones, and quantifiable performance measures, such as supply availability and readiness-related outcomes, and (2) Implement the Office of the Secretary of Defense recommendation to report, as part of budget requests, the impact of additional spare parts funding on equipment readiness with specific milestones for completion.
Defense Inventory: Overall Inventory and Requirements Are Increasing, but Some Reductions in Navy Requirements Are Possible (GAO-03-355, May 8, 2003) To improve the accuracy of the Navy’s secondary inventory requirements, GAO recommended that the Secretary of Defense direct the Secretary of the Navy to require the Commander, Naval Supply Systems Command, to require its inventory managers to use the most current data available for computing administrative lead time requirements. Given the importance of spare parts to maintaining force readiness, and as justification for future budget requests, actual and complete information would be important to DOD as well as Congress. Therefore, GAO recommended that the Secretary of Defense: (1) Issue additional guidance on how the services are to identify, compile, and report on actual and complete spare parts spending information, including supplemental funding, in total and by commodity, as specified by Exhibit OP-31 and (2) Direct the Secretaries of the military departments to comply with Exhibit OP-31 reporting guidance to ensure that complete information is provided to Congress on the quantities of spare parts purchased and explanations of deviations between programmed and actual spending. GAO recommended that the Secretary of Defense establish a direct link between the munitions needs of the combatant commands—recognizing the impact of weapons systems and munitions preferred or expected to be employed—and the munitions requirements determinations and purchasing decisions made by the military services.
Defense Inventory: Improved Industrial Base Assessment for Army War Reserve Spares Could Save Money (GAO-02-650, July 12, 2002) In order to improve the Army’s readiness for wartime operations, achieve greater economy in purchasing decisions, and provide Congress with accurate budget submissions for war reserve spare parts, GAO recommended that the Secretary of Defense direct the Secretary of the Army to have the Commander of Army Materiel Command take the following actions to expand or change its current process consistent with the attributes in this report: (1) Establish an overarching industrial base capability assessment process that considers the attributes in this report. (2) Develop a method to efficiently collect current industrial base capability data directly from industry itself. (3) Create analytical tools that identify potential production capability problems such as those due to surge in wartime spare parts demand. (4) Create management strategies for resolving spare parts availability problems, for example, by changing acquisition procedures or by targeting investments in material and technology resources to reduce production lead times. To improve the control of inventory being shipped, GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to undertake the following: Improve processes for providing contractor access to government-furnished material by: (1) Listing specific stock numbers and quantities of material in repair contracts (as they are modified or newly written) that the inventory control points have agreed to furnish to contractors. (2) Demonstrating that automated internal control systems for loading and screening stock numbers and quantities against contractor requisitions perform as designed. (3) Loading stock numbers and quantities that the inventory control points have agreed to furnish to contractors into the control systems manually until the automated systems have been shown to perform as designed.
(4) Requiring that waivers to loading stock numbers and quantities manually are adequately justified and documented based on cost-effective and/or mission-critical needs. Revise Air Force supply procedures to include explicit responsibility and accountability for: (5) Generating quarterly reports of all shipments of Air Force material to contractors. (6) Distributing the reports to Defense Contract Management Agency property administrators. (7) Determine, for the contractors in our review, what actions are needed to correct problems in posting material receipts. (8) Determine, for the contractors in our review, what actions are needed to correct problems in reporting shipment discrepancies. (9) Establish interim procedures to reconcile records of material shipped to contractors with records of material received by them, until the Air Force completes the transition to its Commercial Asset Visibility system in fiscal year 2004. (10) Comply with existing procedures to request, collect, and analyze contractor shipment discrepancy data to reduce the vulnerability of shipped inventory to undetected loss, misplacement, or theft. For all programs, GAO recommended that the Secretary of Defense direct the Director of the Defense Logistics Agency to take the following actions: (1) As part of the department’s redesign of its activity code database, establish codes that identify the type of excess property—by federal supply class—and the quantity that each special program is eligible to obtain and provide accountable program officers access to appropriate information to identify any inconsistencies between what was approved and what was received. (2) Reiterate policy stressing that Defense reutilization facility staff must notify special program officials of the specific tracking and handling requirements of hazardous items and items with military technology/applications. 
(Agency responses to the preceding recommendations, in order: concurred, closed, implemented; nonconcurred, closed, not implemented; partially concurred, closed, not implemented; partially concurred, closed, implemented; concurred, closed, implemented; concurred, closed, implemented; concurred, closed, not implemented; concurred, closed, not implemented; concurred, closed, implemented; partially concurred, closed, not implemented; partially concurred, closed, implemented; concurred, closed, implemented.) GAO also recommended that the Secretary of Defense ensure that accountable program officers within the department verify, prior to approving the issuance of excess property, the eligibility of special programs to obtain specific types and amounts of property, including items that are hazardous or have military technology/applications. This could be accomplished, in part, through the department’s ongoing redesign of its activity code database. For each individual program, GAO further recommended the following: (1) With regard to the 12th Congressional Regional Equipment Center, that the Secretary of Defense direct the Director of the Defense Logistics Agency to review and amend, as necessary, its agreement with the Center in the following areas: (a) The Center’s financial responsibility for the cost of shipping excess property obtained under the experimental project, (b) The ancillary items the Center is eligible to receive, (c) The rules concerning the sale of property and procedures for the Center to notify the Agency of all proposed sales of excess property, (d) The Center’s responsibility for tracking items having military technology/application and hazardous items, and (e) The need for Agency approval of the Center’s orders for excess property.
(2) With regard to the Army, the Navy, and the Air Force Military Affiliate Radio Systems, GAO recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to have the Joint Staff Directorate for Command, Control, Communications, and Computer Systems review which items these systems are eligible to receive, on the basis of their mission and needs, and direct each of the Military Affiliate Radio Systems to accurately track excess property, including pilferable items, items with military technology/applications, and hazardous items. (3) With regard to the Civil Air Patrol, GAO recommended that the Secretary of Defense direct the Secretary of the Air Force to have the Civil Air Patrol-Air Force review which items the Patrol is eligible to receive, on the basis of its mission and needs, and direct the Patrol to accurately track its excess property, including pilferable items, items with military technology/applications, and hazardous items. To provide the military services, the Defense Logistics Agency, and the U.S. Transportation Command with a framework for developing a departmentwide approach to logistics reengineering, GAO recommended that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to revise the departmentwide Logistics Strategic Plan to provide for an overarching logistics strategy that will guide the components’ logistics planning efforts. Among other things, this logistics strategy should: (1) Specify a comprehensive approach that addresses the logistics life-cycle process from acquisition through support and system disposal, including the manner in which logistics is to be considered in the system and equipment acquisition process and how key support activities such as procurement, transportation, storage, maintenance, and disposal will be accomplished.
(2) Identify the logistics requirements the department will have to fulfill, how it will be organized to fulfill these requirements, and who will be responsible for providing specific types of logistics support. (3) Identify the numbers and types of logistics facilities and personnel the department will need to support future logistics requirements. (4) GAO also recommended that the Under Secretary of Defense for Acquisition, Technology, and Logistics establish a mechanism for monitoring the extent to which the components are implementing the department’s Logistics Strategic Plan. Specifically, the Under Secretary of Defense for Acquisition, Technology, and Logistics should monitor whether the components’ implementation plans (a) are consistent with the departmentwide plan, (b) are directly related to the departmentwide plan and to each other, and (c) contain appropriate key management elements, such as performance measures and specific milestones. Prepare quarterly statistical reports quantifying the cost-effectiveness of the special program requirement initiative to reduce or cancel procurement actions by the use of adjusted buy-back rates, segregated by Defense Supply Centers. A.1. Transmit shipment notification transactions to the Defense Reutilization and Marketing Service when materiel is shipped to the Defense Reutilization and Marketing Office and ensure the data in the shipment notification are accurate. A.2. Review and research Defense Reutilization and Marketing Service follow-up transactions for materiel reported as shipped but not received, and respond to the Defense Reutilization and Marketing Service follow-up transactions in a timely manner. B. Establish controls to ensure that Navy organizations either demilitarize materiel or provide demilitarization instructions to the Defense Logistics Agency Depots, prior to requesting that the depot ship materiel to disposal, and respond to depot requests for demilitarization instructions in a timely manner.
C. Validate that the Realtime Reutilization Asset Management Program Office reprograms its computer system to ensure that disposal shipment notifications, rather than disposal shipment confirmations, are sent to Defense Reutilization and Marketing Service for disposal shipments. D. Request that the Defense Reutilization and Marketing Service provide management reports which identify Navy organizations that are not responding to disposal follow-up transactions for materiel reported as shipped but not received and that are not sending disposal shipment notifications for materiel shipped to disposal. A. Establish controls to ensure that Defense Distribution Depot personnel request the required demilitarization instructions for all materiel awaiting disposal instructions and reverse the disposal transactions if the required instructions are not received. B. Establish controls to ensure that the Defense Reutilization and Marketing Service reviews and analyzes management data to identify Navy organizations that are not routinely preparing shipment disposal notifications or are not routinely responding to follow-up transactions and identify to the Naval Supply Systems Command potential problems with data in the in-transit control system in order for the Naval Supply Systems Command to ensure that Navy organizations comply with disposal procedures. The Commanding General, Marine Corps Logistics Command should: 1. Identify all excess materiel and return the materiel to the supply system, as required by Marine Corps Order P4400.151B, “Intermediate-Level Supply Management Policy Manual,” July 9, 1992. 2. Perform physical inventories of all materiel in all storage locations and adjust inventory records accordingly. The Director, Defense Logistics Agency should: 1. 
Reevaluate the cost categories for determining the average annual cost for maintaining an inactive national stock number item in the Defense Logistics Agency supply system and recalculate the average annual cost consistent with other pricing and cost methodologies. 2. Discontinue application of the draft Defense Logistics Agency Office of Operations Research and Resource Analysis report, “Cost of a DLA Maintained Inactive National Stock Number,” July 2002, to any authorized programs of DOD or the Defense Logistics Agency until all applicable cost categories are fully evaluated and the applicable costs of those relevant categories are incorporated into the cost study. A. Identify the circumstances or conditions under which other nonrecurring requirements are authorized for processing. B. Identify the requirements for documenting the methodology and rationale for using other nonrecurring requirement transactions. C. Establish requirements for identifying the supply center personnel who enter other nonrecurring requirements in the Defense Logistics Agency supply system and retaining other nonrecurring requirement records after the support dates have passed. Establish a timeline for the Defense supply centers to validate outstanding other nonrecurring requirement transactions in the Defense Logistics Agency supply system. Other nonrecurring requirement transactions that do not have sufficient supporting documentation or that cannot be validated should be canceled or reduced and reported to the Defense Logistics Agency. The report should include the total number of other nonrecurring requirement transactions that were deleted and the dollar value of procurement actions that were canceled as a result. The Commander, Ogden Air Logistics Center should immediately: 1. Comply with the guidance in Air Force Manual 23-110, “U.S. 
Air Force Supply Manual,” and Air Force Materiel Command Instruction 21-130, “Equipment Maintenance Materiel Control,” regarding the management of maintenance materiel stored at the Air Logistics Center. 2. Perform an annual physical inventory of all materiel recorded in the D035K Wholesale and Retail and Shipping System that is the responsibility of the Maintenance Directorate, reconcile the results, and turn in excess materiel to supply. 3. Perform a physical count of all materiel located on the maintenance shop floors and in storage areas to identify unaccountable and excess materiel, reconcile the physical count to the D035K Wholesale and Retail and Shipping System, and turn in excess materiel to supply. 4. Complete the review of courtesy storage materiel listed in the materiel processing system and either turn in the excess to supply, move it to the D035K Wholesale and Retail and Shipping System, or dispose of the materiel. A. Expedite funding and the deployment of the Commercial Asset Visibility system to Army commercial repair facilities. Funding and deployment should be prioritized based primarily on the dollar value of repairable assets at the commercial repair facilities. B. Perform oversight of compliance with DoD 4000.25-2-M, “Military Standard Transaction Reporting and Accounting Procedures,” March 28, 2002, to conduct annual location reconciliations between inventory control point records and storage depot records. A. Determine whether the items with inventory records that were adjusted as a result of the October 2002 reconciliation between the Communications-Electronics Command and the Defense Depot Tobyhanna, Pennsylvania, are obsolete or excess to requirements. That determination should be made before requesting special inventories or performing other costly causative research procedures. B. Dispose of those assets that are identified as obsolete or excess to projected requirements. A.
Develop in-house procedures to provide management information reports to the inventory accuracy officer, comparable to the management information reports required in the February 2003 contract awarded to Resources Consultant Incorporated, to assist in reducing in-transit inventory. B. Establish controls to ensure that all in-transit items that meet the criteria in Naval Supply Systems Command Publication 723, "Navy Inventory Integrity Procedures," April 19, 2000, are reviewed prior to writing them off as an inventory loss. The Commander, Warner Robins Air Logistics Center should immediately: 1. Comply with Air Force guidance regarding the management of maintenance materiel stored at the Air Logistics Center. 2. Issue guidance regarding materiel management reports for management review. 3. Perform an annual physical inventory of all materiel recorded in the D035K Wholesale and Retail and Shipping System that is the responsibility of the Maintenance Directorate, reconcile the results, and turn in excess materiel to supply. 4. Perform a physical count of all materiel located on the maintenance shop floors and in storerooms, reconcile the physical count to the D035K Wholesale and Retail and Shipping System, and turn in excess materiel to supply.
[Status column from original table: "Concurred, closed, implemented" (8 entries).]
5. Update or complete Air Force Materiel Command Form 100 for each line of floating stock and spares inventory. Submit to the floating stock and spares monitor for processing those forms in which the authorization level changes. 6. Perform semi-annual reviews of materiel stored in the courtesy storage area and turn in excess materiel to supply. 7.
Perform quarterly reviews of bench stock materiel in the Low Altitude Navigation and Targeting Infrared for Night shop of the Avionics Division and turn in excess materiel to supply. A. Enforce the requirements of Naval Air Systems Command Instruction 4400.5A to identify excess materiel that has been inactive for more than 270 days for routine use materiel and 12 months for long lead-time or low demand materiel. B. Require quarterly reporting of excess materiel at Naval Air Depots to ensure excess materiel does not accumulate. C. Develop policy for point-of-use inventory. A. Perform physical inventories of materiel stored in all storage locations and adjust inventory records accordingly. B. Perform the required quarterly reviews of materiel stored in maintenance storerooms to determine whether valid requirements exist for the materiel. C. Identify all excess materiel stored in maintenance storerooms and return the materiel to the supply system. A. Comply with Navy guidance regarding the storage of maintenance materiel at the depot, performance of quarterly reviews of maintenance materiel on hand, and submission of management reports for review. B. Develop and implement an effective management control program. A. Inventory materiel stored in work center storerooms, record all of the on-hand materiel on accountable records, identify the materiel for which a valid need exists, and return the items with no known requirement to the supply system. B. Review jobs at closeout to determine whether a need exists for leftover materiel. Leftover, unneeded materiel should be made visible to item managers and disposed of in a timely manner. C. Perform the required quarterly reviews of materiel stored in work center storerooms to determine whether valid requirements exist for the materiel.
[Status column from original table: "Concurred, closed, implemented" (12 entries).]
D. Perform physical inventories of materiel stored in all storage locations and adjust inventory records accordingly. A. Comply with the Defense Logistics Agency Manual 4140.2 requirement that Defense Logistics Agency item managers contact the supply center monitor for the weapon system support program to coordinate the deletion of the code that identifies the national stock number item as a weapon system item. B. Comply with the Defense Logistics Agency Manual 4140.3 requirement that the supply center monitor for the weapon system support program notify the Military Departments when a national stock number item supporting a weapon system is to be deleted from the supply system as a result of the Defense Inactive Item Program process. Determine the most efficient and cost-effective method to reinstate national stock number items that were inappropriately deleted from the supply system. A. Review the revised procedures for processing Defense Inactive Item Program transactions when the FY 2002 process is complete to ensure the procedures are working as intended and that inactive item review notifications are being promptly returned to the Defense Logistics Agency. B. Establish controls to ensure that inactive item review notifications are reviewed by the user and are returned to the Defense Logistics Agency before an automatic retain notification is provided to the Defense Logistics Agency. C.
Establish controls to review Defense Logistics Agency transactions deleting national stock numbers from Air Force systems so that the inappropriate deletion of required data from the Air Force supply system is prevented. A. Describe the factors to be used by the Military Departments and supply centers to evaluate the validity of potential candidates for additive investment. B. Require that additive safety level requirements be based on consistent and up-to-date supply availability data. C. Require regular reviews to determine whether additive safety levels continue to be appropriate. Establish a frequency for when and how often reviews should be made and the criteria for making necessary safety level adjustments and reinvesting funds. D. Establish a method for maintaining safety level increases that adheres to the DoD safety level limitation while recognizing and adjusting to changes in the supply system.
[Status column from original table: "Concurred, closed, implemented" (10 entries); "Partially concurred, closed, implemented" (1 entry).]
E. Establish a time frame for continuous program evaluation and a resolution process that includes a flag or general officer from each Military Department whenever problem elevation is needed. Approve and coordinate with the Military Departments the revised implementation plan. A. Revise Defense Logistics Agency Manual 4140.2, "Supply Operations Manual," July 1, 1999, to include terminal national stock number items with registered users in the Defense Inactive Item Program. B. Maintain and report statistics on how many terminal national stock number items are deleted from the supply system after the North Atlantic Treaty Organization and foreign governments review the items.
Establish controls to ensure that the Navy is removed as a registered user of Defense Logistics Agency-managed national stock number items that are no longer required. A. Discontinue the use of the market basket approach to determine which bench-stock items are placed on the industrial prime vendor contract. Instead, evaluate each item separately and select the most economical source to supply material. B. Review inventory levels and discontinue placing items on the industrial prime vendor contract with more than 3 years of inventory. C. Take appropriate action in accordance with contract terms to remove items with more than 3 years of inventory and start using existing depot inventories as the first choice to fill contract demand. Convene a performance improvement team composed of representatives from all relevant stakeholders, including appropriate oversight agencies, to plan and execute a reengineered best value approach to manage bench-stock material for all customers that addresses competition and restriction on contract bundling. B. The Commander, Defense Supply Center Philadelphia should: 1. Implement procedures to ensure that future spot buy material procurements are priced and paid for in accordance with the terms of the contract. 2. Obtain a full refund from the Science Application International Corporation for erroneous charges, including lost interest, and take appropriate steps to reimburse the air logistics centers for the full amount of the contract overcharges. Direct the Corpus Christi Army Depot to comply with Army guidance regarding the storage of maintenance materiel at the depot and the preparation and submission of management reports for review. A. Price the materiel stored in the Automated Storage and Retrieval System that has no extended dollar value or that has been added to the physical inventory, and identify the value of inventory excess to prevailing requirements. B. 
Inventory materiel stored in work centers on the maintenance shop floors, record the materiel on accountable records, identify the materiel for which a valid need exists, and turn in or transfer to other programs excess materiel. C. Perform an annual physical inventory of all of the materiel stored in the Automated Storage and Retrieval System. D. Perform the required quarterly reviews of materiel stored in the Automated Storage and Retrieval System to determine if valid requirements exist for the stored materiel. E. Review projects at the 50-percent, 75-percent, and 90-percent completion stages to determine if a need exists for materiel in storage. F. Perform a reconciliation between the Automated Storage and Retrieval System and Maintenance Shop Floor System files, at a minimum monthly, to determine if files are accurate. A physical inventory should be performed to correct any deficiencies. G. The Commander, Corpus Christi Army Depot should immediately prepare and submit the following report to management for review: 1. A monthly total dollar value for materiel stored in the Automated Storage and Retrieval System. 2. Items stored in the Automated Storage and Retrieval System with no demand in the last 180 days. 3. Materiel stored in the Automated Storage and Retrieval System against closed program control numbers. 4. Materiel stored against overhead program control numbers. 5. Potential excess materiel by program control number. A. The Commander, U.S. Forces Korea should: 1. Establish guidance for delivery of cargo from ports of debarkation within the theater using Uniform Materiel Movement and Issue Priority System standards or U.S. Forces Korea supplemental standards to the Uniform Materiel Movement and Issue Priority System criteria more applicable to theater requirements.
[Status column from original table: "Concurred, closed, implemented" (10 entries); "Partially concurred, closed, implemented" (1 entry); "Concurred, open" (1 entry).]
2. Establish procedures for using and maintaining documentation that provides evidence of delivery times and the accuracy of the delivered cargo. 3. Prepare or amend commercial carrier contracts that contain delivery provisions for weekend and holiday deliveries, and penalties for noncompliance with the standards established by the provisions of Recommendation A.1. 4. Establish procedures to ensure that the priority of the cargo to be delivered from a port of debarkation is matched with a commercial carrier contract that has the necessary provisions that will ensure delivery within the standards established by Recommendation A.1. 5. Establish procedures, metrics, and surveillance plans that will monitor and ensure carrier performance of contract specifications and reconcile movement control documents received from commercial carriers to ensure consignees received prompt and accurate delivery of all cargo. B. The Commander, U.S. Forces Korea should revise U.S. Forces Korea Regulation 55-355 to require: 1. Supply Support Activities to maintain dated and signed truck manifests and pickup sheets to confirm receipt. 2. Supply Support Activities to immediately contact end users for pickup of high priority cargo within the same day the cargo is made available for the end user. The Director, Defense Logistics Agency should: 1. Revise Defense Logistics Agency Manual 4140.2, "Supply Operations Manual," July 1, 1999, to include terminal national stock number items with no registered users in the Defense Inactive Item Program last user withdrawn process. 2.
Maintain and report statistics on how many terminal national stock number items are deleted from the supply system after the North Atlantic Treaty Organization and foreign governments review the items. Ensure that the Joint Total Asset Visibility Program is funded until sufficient operational capabilities of the Global Combat Support System have been fielded and can provide capabilities that are at least equivalent to the existing Joint Total Asset Visibility Program. The Deputy Under Secretary of Defense (Logistics and Materiel Readiness) should: 1. Evaluate the usefulness of the DoD Total Asset Visibility performance measure. 2. Issue specific, written, performance measure guidance that standardizes and clarifies the required data elements for the Total Asset Visibility measure consistent with the evaluation of the usefulness of the measure. 3. Establish and institutionalize a process to evaluate and verify data submitted by DoD Components for the Total Asset Visibility performance measure, consistent with the evaluation of the usefulness of the measure. Reassess guidance regarding the 60-day storage and requisitioning of fabrication materiel at maintenance depots and revise Army Regulation 750-2. The guidance should state the following:
- the appropriate number of days depots should be allowed for storing and requisitioning fabrication materiel.
- quarterly reviews should be performed to determine if materiel is still required.
Issue guidance regarding management of the Automated Storage and Retrieval System at Tobyhanna. The guidance should include the following:
- all materiel stored in the Automated Storage and Retrieval System shall be, at a minimum, identified by owning cost center; national stock number/part number; program control number; quantity; acquisition source code; nomenclature; and condition code.
- a review of any materiel with a date of last activity more than 6 months old shall be performed.
- an annual physical inventory of any materiel stored in the Automated Storage and Retrieval System shall be performed.
- items stored in mission stocks must represent a bona fide potential requirement for performance of a maintenance or fabrication requirement.
- availability of materiel from previously completed fabrication orders must be determined before placing new requisitions.
- projects shall be reviewed at the 50 percent, 75 percent, and 90 percent completion stages to determine if a need exists for materiel still in storage.
- reclaimed materiel, materiel removed from assets in maintenance, and work in process may be stored until reutilized on the maintenance program. Excess reclaimed materiel shall be turned in or transferred to a valid funded program.
- materiel shall not be stored in the Automated Storage and Retrieval System in an overhead account.
- quarterly reviews shall be performed on materiel stored in the Automated Storage and Retrieval System to determine if requirements still exist.
- prior to closing a depot maintenance program, any associated remaining repair parts, spares, and materiel on hand shall be transferred to an ongoing program or a program that will begin within 180 days, or turned in to the installation supply support activity within 15 days. The gaining program must be funded, open, and valid. The transferred materiel must be a bona fide potential requirement of the gaining program.
A.3. The Commander, Communications-Electronics Command should direct Tobyhanna to immediately: a. Price the materiel stored in the Automated Storage and Retrieval System that has no extended dollar value or that has been added to the physical inventory, identify the value of inventory excess to prevailing requirements, and notify the Inspector General, DoD, of the corrected dollar value of the inventory and value of inventory excess to the requirements. b. Limit the storage of materiel in the Automated Storage and Retrieval System under overhead accounts.
Specifically, remove materiel obtained from the Sacramento Air Logistic Center from the overhead account program control numbers. c. Record the Tactical Army Combat Computer System equipment on accountable records and inventory and turn in the computer equipment to the supply system because no requirement for the equipment exists at Tobyhanna. Issue guidance regarding reports that should be submitted to management for review. The guidance should require the following reports:
- an annual physical inventory of all materiel stored in the Automated Storage and Retrieval System.
- a reconciliation between the Automated Storage and Retrieval System and Maintenance Shop Floor System files, at a minimum monthly, to determine if files are accurate. A physical inventory should be performed to correct any deficiencies. Reports should be prepared for management review.
- a monthly total dollar value for materiel stored in the Automated Storage and Retrieval System.
- items stored in the Automated Storage and Retrieval System with no demand in the last 180 days.
- materiel stored in the Automated Storage and Retrieval System against closed program control numbers.
- materiel stored against overhead program control numbers.
- potential excess materiel by program control number.
Direct the Tobyhanna Army Depot to immediately perform a physical inventory and reconcile the Automated Storage and Retrieval System records with the Maintenance Shop Floor System records to verify the accuracy of inventory records and submit a report for review. A-1. Include placement of stocks (malpositioned) as part of the Army Pre-positioned Stocks program performance metrics. As a minimum:
- clearly define malpositioned stocks and establish procedures for calculating the data to minimize inconsistency or data misrepresentation reported by the subordinate activities.
- establish long-term goals for correcting the problems and annually monitor the progress in meeting the goals to ensure the situation doesn't deteriorate.
- examine the feasibility of correcting the Web Logistics Integrated Database limitations and shortfalls identified within this report so the system can be used to produce reliable performance data.
A-2. Improve shelf-life management controls and oversight. As a minimum:
- develop stock rotation plans for items in long-term storage outside the Continental U.S. or remove the items from outside Continental U.S. storage.
- prepare an annual list of all Army Pre-positioned Stocks items due to expire within 12 and 24 months and have U.S. Army Field Support Command ensure stock rotation plans are adequate to minimize expired assets. Use the data to formulate funding requirements for test and inspection.
- use critical data fields within information management systems to assist in shelf-life stock rotations. Require U.S. Army Field Support Command to monitor shelf-life data, such as dates of manufacture and expiration dates, provided by its Army Pre-positioned Stocks sites to ensure it is current and complete. Perform quarterly reconciliations.
- include shelf-life management metrics as part of the Army Pre-positioned Stocks program performance assessment. Establish goals and develop methods to track and minimize the loss of items due to expired shelf life.
[Status column from original table: "Concurred, open" (1 entry).]
A-3. Strengthen accountability controls and enhance data integrity, reliability, and visibility of pre-positioned stocks. Specifically:
- require U.S. Army Communications-Electronics Life Cycle Management Command and U.S. Army Tank-automotive and Armaments Life Cycle Management Command to incorporate controls similar to U.S. Army Aviation and Missile Life Cycle Management Command that will identify and track unauthorized transactions, that is, situations where the ownership purpose code of an item was changed from a war reserve purpose code to a general issue code without first receiving approval from Army Pre-positioned Stocks personnel.
- execute the required steps to place data associated with loan transactions onto the Army Knowledge Online account to facilitate oversight of loan transactions.
- numerically sequence each approved request and use the number to cross-reference back to the approved request.
- include all open Army Pre-positioned Stocks loan transactions issued to item managers that weren't paid back as part of the Army Pre-positioned Stocks program performance assessment.
- require U.S. Army Communications-Electronics Life Cycle Management Command and U.S. Army Tank-automotive and Armaments Life Cycle Management Command to track the paybacks by establishing a scheduled payback target date so they can be proactive in pursuing collections.
- track inventory loss adjustment statistics as a potential source for benchmarking progress on reducing repetitive errors and identifying performance problems.
- establish dollar values for supply class VII inventory adjustments in the Logistics Modernization Program so loss adjustments meeting the causative research criteria are researched.
- randomly sample 25 percent of the inventory loss adjustment transactions to verify the adjustments are supported by evidence of documented causative research and an adequate explanation is documented.
A-4. Track Army Pre-positioned Stocks site weekly data reconciliations to evaluate performance and data reliability. For the Commander, 10th Mountain Division (Light Infantry) A-1. Provide unit commanders with a block of instructions that explains the process and importance of accurately accounting for assets and maintaining the property book. A-2. Establish a reminder system to notify gaining and losing units when equipment transfers occur. A-3. Develop and distribute guidance to operations personnel stressing the need to follow established procedures for accounting for assets and the importance of providing necessary documentation to property book officers.
[Status column from original table: "Concurred, closed, implemented" (3 entries).]
A-4. Research each discrepancy with equipment transfers and turn-in documents and make appropriate adjustments to the property book records for the 1st and 2nd BCTs. If the missing vehicles can't be located in a reasonable time period, initiate an AR 15-6 investigation and, if warranted, take further appropriate action. B-1. Research the discrepancies we found with the 1st, 2nd, and 3rd BCT vehicles and make appropriate adjustments to the respective property books. For the Commander, U.S. Army Aviation and Missile Life Cycle Management Command 1. Require:
- item managers to consider historical procurement data in the Master Data Record's Sector 10 when justifying values they enter for the Requirements System to use as representative estimates of procurement lead time.
- Integrated Materiel Management Center second-level supervisors to review and explicitly approve the procurement lead time values entered into the Master Data Record by item managers.
2. Require contract specialists to adhere to Army and Aviation and Missile Life Cycle Management Command guidance on considering the extent of delay in awarding procurements to vendors when justifying if a procurement should be identified as a representative estimate of a future procurement's administrative lead time. A-1. Initiate DA staff action to withhold funding for increasing safety levels until Army Materiel Command develops test procedures and identifies key performance indicators to measure and assess the cost-effectiveness and impact on operational readiness. For the Commander, Defense Supply Center Philadelphia 1. Monitor the contractor's progress to ensure the contractor completes the reorganization of the bulk storage warehouses with a location grid plan and subsequent warehousing of operational rations with specific location areas in the warehouses.
Then ensure the contractor records updated locations of these rations in the warehouse management system database to ensure the physical location of products matches the database. 2. Complete and implement the software change package to ensure operational rations containing more than one national stock number are allocated from inventory based on the first-to-expire inventory method.
[Status column from original table: "Concurred, closed, implemented" (2 entries).]
3. Develop and implement guidance for the contractor regarding the requirements for the destruction of government-owned operational rations which have been deemed unfit for human consumption. Require the contracting officer representative to certify the destruction certification package only when adequate documentation is attached to support the operational rations being destroyed. Also, require the contracting officer representative to ensure products are destroyed in a reasonable time frame after the Army Veterinarians recommend destruction of the products. If implemented, this recommendation should result in monetary savings to the government. 4. Before shipping excess to theater, review the worldwide excess stock of operational rations and identify the expiration dates on products that may be considered for shipping to replenish operational ration stock in theater. Before shipping stock, coordinate with the Theater Food Advisor to ensure the products can be incorporated into the existing stock on hand and be effectively managed. Also, don't consider for shipment any products with less than 4 months' remaining shelf life unless the Army Veterinarians have inspected and extended the shelf life of the products. In such cases, ensure the documentation accompanies the shipments. 5. Implement a Quality Assurance Surveillance Plan that encompasses all requirements of the prime vendor contract.
Require the Administrative Contracting Officer and the contracting officer representative located at the prime vendor’s location in Kuwait to monitor and document the contractor’s performance using the Quality Assurance Surveillance Plan on a scheduled basis. Upon completion of each review, the Contracting Officer should review the results of the Quality Assurance Surveillance Plan and determine if any actions are required to correct the areas of concern. For the Commander, Defense Supply Center Philadelphia and for the Commander, Coalition Forces Land Component Command 6. Require the Theater Food Advisor and Defense Supply Center Philadelphia to review the quantities of operational rations that are currently excess in the prime vendor’s warehouses and ensure none of these products have orders placed until the excess quantities are projected to be depleted. If implemented, this recommendation will result in funds put to better use. For the Commander, Coalition Forces Land Component Command 7. Require the Theater Food Advisor to periodically review the inventory of government-owned operational rations and ensure appropriate action is taken when products reach their expiration date but remain in the prime vendor’s inventory. If implemented, this recommendation should result in monetary savings to the government. A-1. Ensure that the Defense Contract Audit Agency remains actively involved in monitoring the contractor’s costs. For the Assistant Secretary of the Army (Acquisition, Logistics and Technology) B-1. Develop Army guidance for approving contract requirements for deployment operations to include acquisition approval thresholds, members of joint acquisition review boards, and documentation of board actions. C-1. Establish guidance addressing how to transfer government property to contractors in the absence of a government property officer to conduct a joint inventory. C-2. 
Issue specific policy on (i) screening the contingency stocks at Fort Polk for possible use on current and future Logistics Civil Augmentation Program contracts, and (ii) returning commercial-type assets to the contingency stocks at Fort Polk after specific contract operations/task orders are completed. C-3. Update Army Materiel Command Pamphlet 700-30 to include specific procedures on:
- screening the contingency stocks at Fort Polk for possible use on current and future Logistics Civil Augmentation Program contracts.
- returning commercial-type assets to the contingency stocks at Fort Polk after contracts are completed.
- disposing of obsolete or unusable property.
D-1. Include in an annex to AR 715-9 (Contractors Accompanying the Force) the key management controls related to the Logistics Civil Augmentation Program, or specify another method for determining whether the management controls related to the program are in place and operating. For the Deputy Chief of Staff, G-4 1. Authorized Stockage Lists (Inventory On-Hand): Army should issue a change to policy and update AR 710-2 to require forward distribution points in a deployed environment to hold review boards for authorized stockage lists when they deploy and no less often than quarterly thereafter. Require review boards to accept recommendations from dollar cost banding analyses or justify why not. Improvements needed to better meet supply parts demand. A-1. Develop policy and procedures for the program executive office community to follow to identify, declare, and return excess components to the Army supply system. A-2. Develop and issue guidance that states ownership of Army Working Capital Fund (AWCF) components that subordinate management offices possess and control through modification, conversion, and upgrade programs resides with the Army supply system.
[Status column from original table: "Concurred, closed, implemented" (2 entries); "Nonconcurred, closed, not implemented" (2 entries); "Partially concurred, closed, not implemented" (1 entry); "Concurred, open" (1 entry).]
A-3. Make sure policy is clear on the responsibilities of program executive offices and their subordinate management offices in complying with established Army policy and procedures for asset accountability. Specifically, record and account for all Army assets in a standard Army system that interfaces with the Army system of accountability. As a minimum, make sure item managers:
- have all transactions and information on acquisition, storage, and disposition of their assets.
- are notified of any direct shipments so that the item managers can record the direct shipments to capture demand history for requirements determination.
A-1. Construct permanent or semipermanent facilities in Kuwait and Iraq in locations where a continued presence is expected and that have a large number of containers being used for storage, force protection, and other requirements. For those locations where construction of permanent or semipermanent facilities isn't feasible, use government-owned containers to meet storage, force protection, and other requirements. A-2. Align the Theater Container Management Agency at the appropriate command level to give it the authority to direct and coordinate container management efforts throughout the Central Command area of responsibility. A-3. Direct the Theater Container Management Agency to develop and maintain a single theater container management database. Issue guidance that requires all activities in the area of responsibility to use this database for their container management. A-4. Coordinate with Military Surface Deployment and Distribution Command to purchase commercial shipping containers in the Central Command area of responsibility that are currently accruing detention.
In addition, discontinue use of the Universal Service Contract and only use government-owned containers or containers obtained under long-term leases for future shipment of equipment and supplies into the Central Command area of responsibility. Ensure any long-term lease agreements entered into include provisions to purchase the containers. A-5. Coordinate with Military Surface Deployment and Distribution Command to either get possession of the 917 government-owned containers still in the carriers’ possession, obtain reimbursement from the carriers for the $2.1 million purchase price of the containers, or negotiate with the carriers to reduce future detention bills by $2.1 million. A-6. Coordinate with Military Surface Deployment and Distribution Command to reopen the 6-month review period under the post-payment audit clause to negotiate with commercial carriers to either obtain reimbursement of $11.2 million for detention overcharges on the 29 February 2004 detention list, or negotiate with the carriers to reduce future detention bills by $11.2 million. A-7. Perform either a 100-percent review of future detention bills or use statistical sampling techniques to review carrier bills prior to payment. B-1. Include the minimum data requirements identified in the July 2004 DOD memorandum that established policy for the use of radio frequency identification technology in the statements of work for task order 58 and all other applicable task orders. For the Deputy Chief of Staff, G-4 1. Clarify accountability requirements for rapid fielding initiative (RFI) property distributed through program executive officer (PEO) Soldier; specifically, accountability requirements for organizational clothing and individual equipment (OCIE) items when not issued by a central issue facility (CIF). For the Program Executive Officer, Soldier and For the Executive Director, U.S. Army Research, Development and Engineering Command Acquisition Center 2.
Instruct the appropriate personnel at the rapid fielding initiative warehouse to complete and document causative research within 30 days of inventory. Have the causative research: identify documents used in the causative research process and the procedures followed to resolve the error in the results of the causative research. identify the circumstances causing the variance. make changes to operating procedures to prevent errors from recurring. include government approval signatures before processing inventory adjustments and a system for tracking inventory adjustments so managers can cross-reference adjustments and identify those representing reversals. 3. Assign a quality assurance representative to the rapid fielding initiative warehouse who can provide the appropriate contract oversight and prompt feedback to the contractor on accountability and performance issues. Direct the individual to coordinate with the contracting officer to ensure the contracting officer incorporates instructions for evaluating contract requirements into key documents, such as a surveillance plan and an appointment letter. 4. Coordinate with the contracting officer to instruct the contractor to include the results of performance metrics related to inventory adjustments, location accuracy, inventory accuracy, and inventory control in the weekly deliverables or other appropriate forum. Have the contractor also include a spreadsheet with the overall accountability metric in the weekly reports for each line item and a continental United States (CONUS) fielding accountability spreadsheet after each fielding is completed. The data fields would include an overall inventory control accountability computation: Prior week ending inventory balance + all receipts and returns for the current week = all shipments from the warehouse + ending inventory on hand.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
5.
Direct the RFI contracting officer technical representative from program executive officer Soldier to work together with the contracting officer to develop a surveillance plan and provide the plan to the contract monitor. Include in the plan provisions for spot-checks if developers rely on the contractor’s quality control plan. A-1. Coordinate with the Deputy Chief of Staff, G-3 to develop guidance that instructs deploying units on protecting automation equipment from voltage differences and extreme environmental temperature conditions. A-2. Direct all units in the Kuwaiti area of operations to provide controlled temperature conditions for automation equipment. A-3. Instruct all units arriving in the Kuwaiti area of operations on how to protect automation equipment from voltage differences. B-1. Declassify the order that identifies which combat service support automation management office units should contact for assistance. A-1. Evaluate lessons learned from Operation Iraqi Freedom. As appropriate, adjust force structure requirements for military police and transportation personnel during the Total Army Analysis and contingency operations planning processes. A-2. Reduce the number of trucks assigned to the aerial port of debarkation to better reflect actual daily requirements. Coordinate with the Air Force at the aerial port of debarkation to obtain advanced notice of air shipments on a daily basis. Monitor use periodically to determine if future adjustments are required. A-3. Reestablish a theater distribution management center and make it responsible for synchronizing overall movement control operations for the Iraqi theater of operations. Coordinate with the Multi-National Force-Iraq to establish a standardized convoy tracking and reporting procedure. A-1. Coordinate with depots currently using local databases to track receipt transactions and develop a standard database that can be used by all depots to effectively track receipts from arrival date to posting.
Each depot should be required to use this comprehensive database to track receipts and monitor the suspense dates to ensure receipts are posted to the Standard Depot System within the time standards. At a minimum, this database should include: start and completion dates for key management controls. date of arrival. receipt control number and date assigned. Cross-Reference Number assigned by the Standard Depot System. suspense dates (when receipt should be posted to record). date of physical count and reconciliation to receipt documentation. whether the receipt required a Report of Discrepancy to be sent to the shipper, and the date the report was sent, if required. daily review control (list of receipts that are approaching required posting date). date stored. date posted. reason for not posting within required time frame. A-2. Initiate a change to Army Materiel Command Regulation 740-27 to incorporate steps for identifying misplaced or lost labels in depot quality control checks, command assessments, and other tools used to measure depot performance. A-3. Fully use performance indicators (Depot Quality Control Checks, 304 Reports, and command assessments) as management tools to ensure necessary management controls are in place and operating for all depots’ receipt process. Also, ensure depots have effective training programs that consist of both on-the-job training and formal training to ensure depot personnel are aware of key controls and their responsibilities. Provide training on weaknesses and negative trends identified during biannual command assessments. A-4. Assign receipt control numbers based on the date the receipt arrived and accountability transferred from transporter to depot. A-5. Submit Reports of Discrepancy to shipper for all discrepancies between physical counts and receipt documents, including when no receipt documents are received. A-6.
Post receipts to records in temporary location, when it meets the requirement for a reportable storage location, to ensure receipt transactions are posted so that munitions can be made visible for redistribution in a timely manner. For the Commander, U.S. Army Communications-Electronics Command 1. Reemphasize to item managers to use supply document transactions, as specified in AR 725-50, to generate due-ins in command’s wholesale asset visibility system when directing the movement of military equipment items to a conversion contractor. 2. Direct item managers to use a GM fund code in disposition instructions to troop turn-in units and materiel release orders to storage activities directing shipments of equipment items to conversion contractors or to an Army depot maintenance facility. 3. Request the Logistics Support Activity to assign Routing Identifier Codes and related DOD Activity Address Codes for all conversion contractor operating locations where the contractor maintains quantities of items in the conversion process, but doesn’t presently have the codes. For future conversion contracts develop a process to ensure that all required codes are assigned immediately following contract award. 4. Reemphasize to item managers to: monitor asset visibility system management reports for creation of due-ins. require immediate corrective actions when due-ins aren’t created in the asset visibility system. 5. Reemphasize to item managers the requirement to perform follow-up on due-ins when receipts aren’t posted in command’s asset visibility system within time periods stated in AR 725-50. 6. Incorporate into the current and all future conversion contracts, in coordination with the appropriate Project/Program Managers, the requirement for conversion contractors to transmit supply document transactions to the asset visibility system at Communications-Electronics Command in order to report: receipts of assets upon arrival at the contractor’s plant.
changes in item configurations during the conversion process. shipments to gaining activities following conversion operations. 7. Until the conversion contracts are modified as detailed in Recommendation 6, require operating personnel to obtain all necessary supply documents and manually enter all necessary transactions into command’s asset visibility system to report receipts at contractor locations from turn-in units and storage activities, changes in equipment item configurations, and shipments of converted items to gaining activities. 8. Take appropriate actions to ensure unused component parts returned from conversion programs are not improperly reported in command’s asset visibility system as complete military equipment systems. Specifically, for National Stock Number 5840-01-009-4939: request an inventory at the depot storage activity to identify all component parts improperly returned as complete systems. use the inventory results to adjust on-hand quantities in command’s asset visibility system to ensure accurate balances.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
9. Direct the Tobyhanna Army Depot maintenance facility to take all actions necessary to ensure appropriate supply document transactions are processed when equipment items are received, converted, and transferred back to storage ready for issue. 10. Direct operating personnel to evaluate all Communications-Electronics Command equipment items undergoing disassembly, conversion, modification, or overhaul programs to determine if the same processes used for the items discussed in this report are applicable to them. If so, require operating personnel to apply the recommendations in this report to those affected items. For the Commander, U.S. Army Materiel Command 1.
Establish Army guidance requiring integrated materiel managers to perform annual reviews of holding project assets and follow up on redistribution actions. 2. Direct commodity commands to redistribute holding project assets to other pre-positioned stock projects or to general issue. 3. Direct commodity commands to dispose of excess, unserviceable, and obsolete assets in holding projects. Direct materiel managers to review the 38 bulky items in holding projects to identify excess assets and dispose of them. 4. Establish guidance on the use of holding projects that requires managers to either provide a documented rationale for retaining excess assets in holding projects or dispose of them. Include in the guidance the requirement that inventory management commanders or their designees review the retention rationales for approval or disapproval. 5. Establish guidance that requires materiel managers to review holding projects annually to identify unserviceable (condemned, economically unrepairable, and scrap) and obsolete assets in holding projects. Include in the guidance the requirement that the identified assets be disposed of within 12 months. For the Joint Munitions Command 1. Use the integration plan to manage the integration of automatic identification technology in receiving and shipping processes, as well as the seal site program. At a minimum, the plan should be periodically reviewed to make sure: adequate workforces are dedicated for integration tasks in the future. equipment and software are thoroughly tested and determined to be functional before being fielded to ammunition storage activities.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
2. Require the contractor to use Standard Depot System’s composition rules and traditional edit checks in software development for the remaining applications to automatic identification technology.
The development should include the: use of established performance measures to ensure that all the contractor’s products and services meet Joint Munitions Command’s automatic identification technology needs, such as appropriate edit checks before fielding. development of specific tasks with timelines to ensure that established implementation goals are met in the most effective and efficient manner. This should include penalties to ensure timely delivery of necessary equipment and software applications from contractors. A-1. Establish procedures that ensure commands and units reduce training ammunition forecasts when units determine that training ammunition requirements have changed. B-1. Make sure ammunition supply point personnel follow procedures to post all ammunition supply transactions in the Training Ammunition Management System on the day the transaction occurs. B-2. Make sure the ammunition supply point has procedures to maintain updated plan-o-graphs that show the locations and lot numbers of the ammunition stored in the ammunition supply point bunkers and includes the procedures in the supply point’s standing operating procedures. B-3. Develop a plan to establish a reliable quality assurance specialist (ammunition surveillance) capability for the ammunition supply point and California Army National Guard units. Include in the plan an evaluation of whether the California Guard should have an internal quality assurance capability instead of relying on a memorandum of agreement with Fort Hunter-Liggett. B-4. Correct the contingency ammunition control problems at California Guard units by: identifying all contingency ammunition that is currently on-hand at all California Guard units and establishing proper accountability over the ammunition. preparing a serious incident report if the amount of ammunition unaccounted for that is identified at the units meets the criteria in AR 190-40. 
ensuring that units and the ammunition supply point follow established procedures for maintaining all issue and turn-in documentation for security ammunition to support the quantities recorded on the units’ hand receipt.
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
B-5. Follow procedures for reviewing and updating security and contingency ammunition requirements. At a minimum: determine ammunition requirements based on threat assessments, potential missions and force structure available to provide a response. coordinate and establish a current ammunition distribution plan. conduct an annual review of ammunition requirements. maintain a list of where ammunition is being stored for State contingency by type and quantity. B-6. Make sure units follow the requirement to provide all small arms supply transactions to the U.S. Property and Fiscal Office within 5 working days so that the DA central registry can be updated within 10 working days. B-7. Make sure units follow the checklist in AR 190-11 related to physical security over the storage of small arms and document the results of their inspections. For the Commander, Eighth U.S. Army 1. Take appropriate action to perform and document required Operational Project reviews. Specifically: establish and prescribe guidelines and criteria that will inject more discipline into the Operational Project review and validation process. Prescribe key factors, best practices, and methods for determining and documenting Operational Project requirements. have each project proponent perform an analysis each year in accordance with the annual review process in Army Regulation 710-1 and whenever the Operational Plan changes. The project proponent should include an updated letter of justification that references where each project’s list of requirements originated and how the quantities for each item were computed.
after receiving the official response from the project proponent, Eighth Army, G-4, War Reserve, should submit a memorandum to Headquarters, DA, G-4 for the purpose of documenting the annual review. 2. Have the War Reserve Branch track completion of annual reviews and 5-year revalidations; periodically review documentation of reviews and revalidations to evaluate their sufficiency. For the Deputy Chief of Staff, G-3 1. Develop and apply detailed criteria to assess the adequacy of operational project packages and the validity of related requirements, and approve only those projects that meet the criteria. 2. Establish criteria and guidelines that require proponent commands to identify and prioritize mission essential equipment in operational projects. Establish a policy to fund the higher priority items first. For the Deputy Chief of Staff, G-3 and For the Deputy Chief of Staff, G-4 3. Establish and prescribe guidelines and criteria that will inject more discipline into the operational project requirements determination process. Prescribe key factors, best practices, and methods for determining and documenting operational project requirements. For the Deputy Chief of Staff, G-4 4. Designate only commands with clear or vested interest in projects as the proponents. 5. Provide guidance to project proponents that outline strategies and methodologies for reviewing and revalidating operational projects. 6. Track completion of reviews and 5-year revalidations, periodically review documentation of reviews and revalidations to evaluate its sufficiency, and reestablish the enforcement policy that would allow cancellation of operational projects when proponents don’t perform timely, adequate reviews or revalidations. Consider having a formal Memorandum of Agreement with Army Materiel Command to track operational project reviews and revalidations. 7. 
Revise guidance requiring annual reviews for all operational projects to consider the individual characteristics of projects when scheduling the frequency of reviews. For the U.S. Army Aviation and Missile Command 1. Instruct the responsible item managers to: initiate actions to dispose of quantities that exceed documented requirements for the seven items identified. determine if it’s economical to reduce the planned procurement quantities excess to requirements for the five items identified. For those that are economically feasible, take action to reduce planned procurement quantities. If these actions are implemented, we estimate they will result in potential monetary savings of about $1.7 million. For the Commanding General, Combined Joint Task Force 180 1. Build semi-permanent storage facilities for class I supplies at Bagram and Kandahar, including facilities for dry and frozen goods storage. 2. Direct base operations commanders to record all containers purchased with Operation Enduring Freedom funds in the installation property books. In addition: conduct a 100-percent physical inventory of shipping containers at each installation. record all leased and purchased containers in the property book. Make sure the serial numbers of the shipping containers are recorded, too. establish procedures with the contracting office to ensure that the installation property book officer is given documentation when containers are purchased or leased. For the Commander, Combined Joint Task Force 180 1. Increase the size of the supply support activity in Bagram to 1,700 line items of authorized stockage list to ensure the availability of critical aviation spare parts. 2. Require the supply support activity officer to hold inventory reviews every 30 days or less with aviation maintenance units to ensure adequate inventory levels of items on the authorized stockage list. 3.
Place Army expeditors—“the go-to guys”—familiar with class IX aviation spare parts at choke points located in Germany in the Army and Air Force delivery system to prioritize pallets and shipments. For the Deputy Chief of Staff, G-4 1. Establish theater DOD activity address codes for units to fall in on when assigned to Operation Enduring Freedom. For the Deputy Chief of Staff, G-4 1. Issue guidance directing activities to attach radio frequency tags to shipments en route to the Operation Enduring Freedom area of responsibility. Enforce requirements to tag shipments by directing transportation activities not to allow the movement of cargo without a radio frequency tag attached. 2. Direct Military Traffic Management Command to obtain radio frequency tag numbers from activities shipping goods and to report those tag numbers to transportation officers by including them in the in-transit visibility (ITV) Stans report. 3. Issue additional guidance to activities clarifying procedures they should follow for the retrograde of radio frequency tags and to replenish their supply of tags. For the Joint Logistics Command 1. Make sure movement control teams tag shipments as required by US Central Command guidance to ensure that improvements continue during future rotations. A-1. Direct responsible activities to: validate current requirements for subproject PCA (authorizing chemical defense equipment for 53,000 troops) to augment U.S. Army Europe’s second set deficiencies and submit the requirements to DA for approval in accordance with AR 710-1. revalidate requirements for chemical defense equipment for project PCS (see PCA), including the addition of equipment decontamination kits. Revise requirements for chemical defense equipment for the Kosovo Force mission and submit the changes to DA. A-2. 
Ask Army Materiel Command to fully fill revised requirements for chemical defense equipment for operational project PCS and to redistribute or dispose of excess items from operational projects PCA and PBC.
Concurred, closed, implemented
Concurred, closed, implemented
Nonconcurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
B-1. Direct responsible activities to review and validate all project requirements for collective support systems as required by AR 710-1. C-1. Direct responsible activities to: ask DA to cancel subprojects PZP and PZQ (project codes to provide equipment for reception of reinforcing forces deploying to Europe and other theaters). develop requirements and request a new receiving, staging, onward movement and integration operational project, if needed, in accordance with AR 710-1. D-1. Ask DA to cancel operational subproject PYN (project code) for aircraft matting. D-2. Submit new operational project requirements for aircraft matting to DA in accordance with AR 710-1. A-1. Develop a system of metrics, to include performance goals, objectives, and measures, for evaluating the reliability of data in the capability. Establish processes for comparing actual performance to the metrics and taking remedial action when performance goals and objectives aren’t met. (Recommendation B-3 calls for a process to compare data in the capability and feeder systems. The results of these comparisons would constitute the actual data reliability performance.) A-2. Develop goals and objectives for use in evaluating the success of redistribution actions for Army assets. Develop procedures for identifying and correcting the causes for referral denials that exceed the established goals. B-1. Issue guidance to project and product managers detailing the proper use of bypass codes on procurement actions. B-2.
Include definitive guidance on the use of bypass codes into appropriate guidance documents on The Army’s business processes, such as AR 710-1. Make sure the guidance explains the ramifications of using the different codes. B-3. Direct the Logistics Support Activity to perform periodic reviews of data in the capability to ensure that it agrees with data in feeder systems, and take action to identify and correct the causes for any differences. B-4. Require commodity commands to use the Post-award Management Reporting System to help manage contract receipts. Also, make sure the Logistics Modernization Program has the capability to manage invalid due-in records. B-5. Direct commodity commands to delete all procurement due-in records with delivery dates greater than 2 years old. Have the commodity commands research and resolve due-in records with delivery dates more than 90 days old but less than 2 years old. B-6. Direct commodity commands to review and remove invalid due-in records for field returns with delivery dates over 180 days.
Concurred, closed, not implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
Concurred, closed, not implemented
Nonconcurred, closed, not implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, open
B-7. Require commodity commands to periodically scan the Commodity System for procurement actions issued with bypass codes. Ask project and program managers to explain the decision to use a bypass code. Report the results of the review to the Assistant Secretary of the Army (Acquisition, Logistics and Technology). If the Logistics Modernization Program continues to employ bypass codes or other methods that prevent the creation of a due-in record, conduct similar reviews when the Logistics Modernization Program is implemented. C-1. Incorporate instructions on the use of the capability into appropriate guidance documents on The Army’s logistics business processes, such as AR 710-1.
These instructions should address topics such as reviewing the capability for excess items before procuring additional stocks. C-2. Direct the Logistics Support Activity to review data in the Army Total Asset visibility capability for potentially erroneous data. Establish a procedure for reporting the potentially erroneous data to the activities responsible for the data and performing research to determine the validity of the data. D-1. Revise AR 710-2 and 710-3 to comply with the requirements of AR 11-2. Specifically: develop management control evaluation checklists addressing the accuracy and reliability of data in the Army Total Asset visibility capability and publish these controls in the governing Army regulations, or identify other evaluation methods and include these in the applicable Army regulations. For the Commander, U.S. Army Materiel Command 1. Emphasize to the commodity commands the need to periodically review the process for creating asset status transactions in the Commodity Command Standard System to ensure the transactions are properly created and forwarded to the Logistics Support Activity. 2. Revise Automated Data Systems Manual 18-LOA-KCN-ZZZ-UM to require activities to promptly submit monthly asset status transactions to the Logistics Support Activity. For the Commander, U.S. Army Materiel Command Logistics Support Activity 3. Establish procedures for notifying source activities when the capability rejects asset status transactions. Make sure that rejected and deleted transactions are reviewed to identify reasons for the transactions being rejected or deleted. If appropriate, correct the rejected transactions and resubmit them for processing to the capability. Based on the results of the reviews, take appropriate action to correct systemic problems. 4. Establish a control log to monitor participation of Army activities in the monthly asset status transaction process. 
Use the log to identify activities that didn’t submit a monthly update and determine why an update wasn’t submitted. Report frequent abusers of the process through appropriate command channels. 5. Report to the Deputy Chief of Staff, G-4 that AR 710-3 needs to be revised to require activities to promptly submit monthly asset status transactions to the Logistics Support Activity. 6. Document the process used to update information in the asset visibility module of the Logistics Integrated Data Base. A-1. Obtain a document number from the installation property book office before ordering installation property or organizational clothing and individual equipment. Order only equipment and vehicles for valid requirements approved by the Joint Acquisition Review Board. A-2. Include written justification, analyses, and study results in documentation for purchase requests and commitments before acquisition decisions are made. A-3. Determine the number of vehicles required for the mission. Consider adjusting dollar thresholds for approval by the Joint Acquisition Review Board. A-4. Establish written policy to secure explosives using the interim plan. Build a permanent secure area for explosives awaiting movement as soon as possible. A-1. When updating the variable cost-to-procure factor, make sure the following steps are completed until a system like activity-based costing is available to capture costs: develop cost data for each functional area using groups of well-trained functional experts. properly document the process used to develop costs. research and substantiate variances in cost data among buying activities. A-2. Make sure updates to the variable cost-to-procure factor are given to each buying activity and properly input into the materiel management decision file in the Commodity Command Standard System.
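The variable cost-to-procure and cost-to-hold factors discussed above matter because they feed the requirements computation: if they are stale, buy quantities are distorted. As a minimal sketch under assumed inputs, the classic economic order quantity formula shows how these two factors interact; the report's actual economic order quantity/variable safety level model has more inputs, and all the numbers below are hypothetical.

```python
import math

# Minimal sketch: how a variable cost-to-procure factor (ordering cost)
# and a cost-to-hold factor enter a classic economic order quantity
# (EOQ) computation. All values are illustrative, not Army factors.

def economic_order_quantity(annual_demand: float,
                            cost_to_procure: float,
                            unit_price: float,
                            holding_rate: float) -> float:
    """Classic EOQ: sqrt(2DS / H), where H = unit_price * holding_rate."""
    holding_cost_per_unit = unit_price * holding_rate
    return math.sqrt(2 * annual_demand * cost_to_procure / holding_cost_per_unit)

if __name__ == "__main__":
    # 1,200 demands a year, $150 variable cost to procure an order,
    # $40 unit price, 20 percent annual cost to hold a unit in stock
    qty = economic_order_quantity(1200, 150.0, 40.0, 0.20)
    print(round(qty))
```

Note the sensitivity: doubling the cost-to-procure factor raises the computed order quantity, which is why the recommendations insist the factors be documented, substantiated, and correctly entered into the materiel management decision file.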
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, closed, implemented
Concurred, open
A-3. Review the variable cost-to-procure elements in the materiel management decision file and determine which of the three variable cost-to-procure cost categories should be used to update each element. Provide this information to the buying activities for implementation. Do periodic checks to make sure the elements are updated properly. A-4. Review the other factors in the materiel management decision file mentioned in this report for accuracy, especially those that haven’t been updated in the past 2 years. Specifically make sure the buying activities update the following factors using data related to the commodity they manage: Variable Cost to Hold (General Storage Cost, Discount Rate, Storage Loss Rate, and Disposal Value). Probability of No Demands. Depot Cost Elements (Stock Issue Cost, Fixed Cost, Receipt Cost for Stocked Item, and Non-Stocked Cost). Percent Premium Paid. Add-Delete Demands. B-1. Have the Requirements Integrity Group (or a similar working group) periodically review the factors used in the economic order quantity/variable safety level model for accuracy—especially those discussed in this objective. Provide guidance to buying activities for properly updating factors and make sure updated factors are processed in the automated system. For the Assistant Secretary of the Army (Acquisition, Logistics and Technology) 1. Issue written policy prescribing the specific roles and responsibilities, processes, and key management controls for developing and integrating automatic identification technology into logistics processes.
As a minimum, include requirements for funding, milestone decisions, in-process reviews, test and evaluation plans, life-cycle cost estimates, benefit analyses, coordination with other system developments, and transfer of finished products. Also, consider subjecting the Army’s development of automatic identification technology to the prescribed acquisition procedures of AR 70-1.
2. Prepare a business case analysis for each automatic identification technology application that the Army has ongoing and planned. Adjust applications, if appropriate, based on the results of the business case analyses.
3. Establish a central oversight control within the Army for automatic identification technology. As a minimum, set up a process to:
- monitor all development and funding within the Army for automatic identification technology.
- verify that similar developments aren’t duplicative.
For the Commander, U.S. Army Training and Doctrine Command
4. Update the operational requirements document for automatic identification technology. As a minimum, determine the Army-wide need for standoff, in-the-box visibility and document the results in an updated operational requirements document.
Revise the current version of AR 710-2 to make Dollar-Cost Banding mandatory. Set a date for implementing Dollar-Cost Banding that will allow for gradual implementation by major commands, divisions, and other activities with supply support activities.
A-1. Issue a message to all major commands and subordinate activities informing them of problems and best practices identified during our audit. Use the draft advisory message as a guide for preparing the message (Annex E). Advise major commands and divisions responsible for maintaining units on alert status for rapid deployment in response to a crisis to ensure their local policies (such as major command regulations or division Readiness Standing Operating Procedures) include the provisions outlined in the message.
A-2.
Modify AR 710-2 to include guidance for major commands and subordinate activities responsible for maintaining units on alert status for rapid deployment to follow to ensure adequate repair parts support during the initial period of deployment. As a minimum, require that divisions with alert units have:
- an assumption process in place that includes procedures for detailed planning of Class IX requirements.
- a deployment notification process in place with procedures for conducting a summary review of Class IX stocks planned for deployment, considering such factors as the deployment environment, anticipated operating tempo, or intensity of the operations.
A-3. Modify DA Pamphlets 710-2-1 and 710-2-2 to include detailed procedures for divisions to follow to ensure alert forces have adequate Class IX repair parts support. Review the best practices outlined in this report (and the draft advisory message in Annex E) as a starting point for revising the pamphlets.
A-4. Update Field Manual 10-15 (Basic Doctrine Manual for Supply and Storage) to reflect current policies and address the key procedures discussed earlier in this report. Additionally, update the field manual to provide guidance on such issues as:
- how to identify Class IX repair part requirements for alert forces.
- how to identify repair parts shortages and whether to requisition shortage items.
- what priority designator code to use for requisitioning parts during the assumption process and when in alert status.
- when to use pre-packaged inventories.
- when to pre-position parts at airfields (with alert force equipment).
B-1. Include key management controls for alert forces in an appendix of AR 710-2 as prescribed by AR 11-2, or incorporate these controls into the existing Command Supply Discipline Program. Consider our list of key controls contained in Annex H to identify controls for inclusion in the regulation.
The Director of Logistics Readiness, Air Force Deputy Chief of Staff for Installations, Logistics and Mission Support should:
a. Require Air Force personnel to delete all invalid adjusted stock levels identified in the audit.
b. Establish procedures to improve adjusted stock level management. Specifically, revise Air Force Manual 23-110 to:
- address the role of the Logistics Support Centers. Specifically, require that Logistics Support Center personnel approve only base-initiated adjusted stock levels with sufficient justification on Air Force Forms 1996, maintain all Air Force Forms 1996, and initiate the revalidation process.
- improve the revalidation process. Specifically, the guidance should contain the following controls:
  - a revalidation checklist detailing procedures logistics personnel should use to revalidate adjusted stock levels.
  - a requirement that personnel accomplish the revalidation every 2 years.
  - a requirement to use Air Force Form 1996 to establish each adjusted stock level (including MAJCOM-directed adjusted stock levels) and include a detailed justification of the adjusted stock level purpose and duration.
A.1. The Air Force Materiel Command Director of Logistics should:
a. Direct air logistics center shop personnel to delete the invalid Credit Due In From Maintenance details identified by audit (provided separately).
b. Establish procedures requiring an effective quarterly Credit Due In From Maintenance Reconciliation. Specifically, Air Force Manual 23-110, US Air Force Supply Manual, and Air Force Materiel Command Instruction 23-130, Depot Maintenance Material Control, should require maintenance personnel to provide written documentation for each Credit Due In From Maintenance detail (i.e., supported by a “hole” in the end item). If such supporting documentation is not provided, require retail supply personnel to delete the unsupported Credit Due In From Maintenance details.
c.
Develop training for air logistics center shop personnel regarding proper spare part turn-in and Credit Due In From Maintenance Reconciliation procedures. Specifically, the training should define the various ways to turn spare parts in, and the differences between each method, to include the impact of improperly turning in spare parts. In addition, proper Credit Due In From Maintenance Reconciliation procedures should be covered in depth, to include training on what constitutes appropriate supporting documentation.
A.2. The Air Force Materiel Command Director of Logistics should:
a. Establish detailed procedures in Air Force Manual 23-110 on how an item manager should validate Due Out To Maintenance additives (i.e., what constitutes a Due Out To Maintenance additive, where the item manager can validate the additive, which priority backorders are associated with Due Out To Maintenance, etc.). [Status: Concurred, open]
b. Direct Warner Robins Air Logistics Center to rescind local policy allowing item managers to increase the Due Out To Maintenance additive quantity to account for install condemnations.
c. Issue a letter to item managers reemphasizing the requirement to document the methodology used to validate changes to Due Out To Maintenance additives, and retain adequate support for the Due Out To Maintenance additive quantity.
A.1. Air Force Materiel Command Directorate of Logistics and Sustainment personnel should update Air Force Materiel Command Manual 23-1, Requirements for Secondary Items, to:
a. Include instruction on what information should be developed and retained to support estimated condemnation rates. The guidance should include maintaining documentation on key assumptions, facts, specific details, decision makers’ names and signatures, and dates of decisions so the condemnation percentage can be recreated.
b. Establish sufficient guidance to instruct equipment specialists on managing parts replacement forecasting.
Specifically, develop a standardized method to plan for replacement part acquisition while phasing out the old parts.
The Air Force Materiel Command Director of Logistics and Sustainment should:
a. Correct the shop flow times for the 211 items with requirements discrepancies.
b. Revise the process for computing shop flow times to adhere to DoD 4140.1-R, which requires the removal of awaiting maintenance and awaiting parts times from requirements computations.
c. Evaluate the D200A Secondary Item Requirements System computer program to identify and correct the programming deficiencies adversely impacting the shop flow times computation.
d. Complete the ongoing automation effort designed to eliminate manual processing errors.
A.1. The Air Force Deputy Chief of Staff, Installations and Logistics, should:
a. Revise Air Force Manual 23-110 to:
(1) Provide supply discrepancy report missing shipment procedures consistent with Air Force Joint Manual 23-215 guidance.
(2) Establish supply discrepancy report dollar value criteria consistent with DoD 4500.9-R guidance.
b. Establish base supply personnel training requirements on supply discrepancy report procedures and communicate those requirements to the field.
Request that the Defense Logistics Agency comply with procedures requiring depot supply personnel to inspect packages and submit supply discrepancy reports when appropriate.
A.1. The Air Force Deputy Chief of Staff, Installations and Logistics, should:
a. Revise Air Force Manual 23-110 to (1) describe more thoroughly the documentation requirements for data elements used to compute readiness spares package item requirements and (2) require all readiness spares package managers to attend training that includes an adequate explanation of data element documentation requirements. [Status: Concurred, open]
b.
Upgrade the Weapons System Management Information System Requirements Execution Availability Logistics Model to (1) accept mechanical data element transfers directly from other source systems and (2) prompt readiness spares package managers to input documentation notations supporting the rationale of changes in readiness spares package data elements.
A.1. The Air Force Materiel Command Directorate of Logistics and Sustainment personnel should:
a. Reduce the stock level day standard value from 10 days to 4 days in the D200A Secondary Item Requirements System.
b. Develop and implement an automated method in the Advanced Planning and Scheduling system to measure the actual order and ship time needed to replenish depot-level maintenance serviceable stock inventories.
c. Develop and implement an interim method to measure or estimate depot order and ship time until an automated method is developed.
A.1. The Deputy Chief of Staff, Installations and Logistics, Directorate of Logistics Readiness should require the Distribution and Traffic Management Division to:
a. Direct Transportation Management Office personnel to communicate to consignors the cost and timing benefits of moving shipments via door-to-door commercial air express carrier service when eligible based on DoD and Air Force guidance. If the consignor refuses the cost-effective mode, require a waiver letter expressing the need to use the Air Mobility Command carrier.
b. Develop criteria to allow consignors to adequately identify priority requirements and assign appropriate priority designator codes when shipping assets via Air Mobility Command airlift. These criteria should be included in Air Force Instruction 24-201.
c. Instruct Transportation Management Office personnel to properly review all shipping documentation to ensure all required information is completed by the consignor prior to accepting cargo for movement to the Air Mobility Command aerial port.
A.1.
The Air Force Materiel Command Director of Logistics and Sustainment should:
a. Establish procedures to properly budget for delayed discrepancy repair requirements by accounting for the eventual return and repair of unserviceable items in the requirements/budget process, starting with the March 2005 computation cycle.
b. Develop procedures, or include an edit in the new system, that flag additives and prompt the item manager to perform thorough reviews of additive requirements.
c. Develop a process that requires program managers, item managers, and other applicable program directorate personnel to periodically review program and mission direct additive requirements to verify that duplication has not occurred. [Status column residue: Concurred, closed, implemented; Concurred, open]
d. Inform all item managers and air logistics center managers that it is an inappropriate use of mission direct additives to retain excess inventory or preclude contract terminations. Additionally, reiterate regulatory guidance delineating the approved process for retaining excess materiel and preventing contract terminations.
A.1. The Air Force Materiel Command Director of Logistics and Sustainment should:
a. Direct item managers to correct erroneous requirements identified during this review.
b. Revise Air Force Materiel Command Manual 23-1 to clarify procedures for adjusting low-demand item requirements. Specifically, ensure the guidance clearly states item managers may restore previously decreased requirements to their original level.
A.1. The Air Force Materiel Command Director of Logistics and Sustainment should:
a. Direct item managers to correct all erroneous requirements computations and related budgets identified during this review.
b. Revise Air Force Materiel Command Manual 23-1 to correct guidance conflicts. Specifically, ensure the guidance only contains the correct standard requirements (3 days for base processing times and 10 days for reparable intransit times).
A.1.
The Air Force Materiel Command Director of Logistics should revise Air Force Materiel Command Manual 23-1 to:
a. Require item managers to review and identify excess next higher assemblies that could be used to satisfy indentured item repair, as well as buy, requirements.
b. Provide specific procedures for item managers to follow to satisfy the indentured item buy and repair requirements.
Revise training, and then train item managers to use indentures system data to identify excess next higher assemblies that could be used to satisfy indentured item requirements.
B.1. The Air Force Materiel Command Director of Logistics should:
a. Require equipment specialists to correct inaccurate indentures system data.
b. Publish the draft guidance requiring equipment specialists to ensure indentures system data accuracy.
c. Train equipment specialists to maintain indentures system data accuracy.
The Air Force Materiel Command Director of Logistics should:
a. Collect the unserviceable parts identified during the audit from the contractors or adjust the price of those parts (FY 2000-2002, $238.9 million and estimated FY 2003, $79.6 million).
b. Establish a mechanism to track the issue and return of parts issued to customers who subsequently provide those parts to contractors, as prescribed in Air Force Manual 23-110, Volume I, Part 3, Chapter 7.
c. Either revise the policy to issue parts at standard price to customers who subsequently provide those parts to contractors, or develop a due-in-from-maintenance-like control to adjust the part’s price if the unserviceable parts are not returned.
A.1. The Deputy Chief of Staff, Installations and Logistics should:
a. Revise Air Force Instruction 21-104 to require engine managers to input a follow-on tasked unit into the requirements computation system as a single unit.
b. Modify PRS software to compute spare engine needs based on the combined flying hours for follow-on tasked units.
A.1.
The Air Force Materiel Command Supply Management Division should:
a. Implement corrective software changes to the Secondary Item Requirements System and Central Secondary Item Stratification Subsystem to remove the Other War Reserve Materiel requirements from the Peacetime Operating Spares requirements and report Other War Reserve Materiel requirements separately.
b. Implement interim procedures to remove Other War Reserve Materiel requirements from the Peacetime Operating Spares requirements and budget, and report Other War Reserve Materiel requirements separately, until they implement Recommendation A.1.a.
A.1. The Air Force Materiel Command Director of Logistics should:
a. Direct maintenance management personnel to provide adequate oversight to ensure maintenance personnel turn in all aircraft parts to the Weapon System Support Center or courtesy storage areas.
b. Revise Air Force Materiel Command Instruction 21-130, directing air logistics center Weapon System Support Center management to establish a supply inventory monitor to oversee maintenance work areas, ensuring excess parts are turned in to Weapon System Support Center or courtesy storage areas.
Reemphasize the regulatory requirement (Air Force Materiel Command Instruction 21-130) to the air logistics center maintenance supervisors to assign a maintenance inventory control monitor to oversee the maintenance areas and ensure maintenance personnel tag and label all parts with the applicable aircraft number and the serviceability condition.
Request that the Air Force Materiel Command Director of Logistics include Air Force Logistics Management Agency Stocking Policy 11 in the Readiness Based Leveling system to calculate C-5 forward supply location spare parts stock levels.
Instruct item manager specialists that Air Force Form 1996 is not required to maintain Army Materiel Command Forward supply secondary item requirements in the Secondary Item Requirements System.
A.1.
The Air Force Materiel Command Director of Logistics should:
a. Remove the D200A Secondary Item Requirements System automatic asset balance variance adjustment.
b. Establish training requirements for air logistics center personnel on how to research and resolve D200A Secondary Item Requirements System asset balance variances.
[Status column from the original table, no longer aligned to individual recommendations: Concurred, closed, implemented (8); Concurred, closed, not implemented (3)]
c. Revise Air Force Materiel Command Manual 23-1 to require that item managers defer an item’s buy and/or repair requirement until reconciling any asset balance variance greater than a specified threshold (variance percent, quantity, and/or dollar value).
d. Establish asset balance variance oversight procedures to verify item managers resolve asset balance variances.
A.1. The Air National Guard, Deputy Chief of Staff, Logistics, should:
a. Address to subordinate units the importance of following Air Force equipment guidance related to small arms accountability, inventory, documentation, storage, and disposal, and the competitive marksmanship program.
b. Request the Air National Guard Inspector General to include small arms accountability, inventory, documentation, storage, and disposal requirements as a special emphasis area in unit inspections.
B.1. The Air National Guard, Deputy Chief of Staff, Logistics, should:
a. Direct all Air National Guard units to revalidate small arms and conversion kit requirements using Allowance Standard 538.
b. Recompute requirements (including M-16 conversion kits), reallocate small arms on hand based on adjusted authorizations, and adjust requirements and requisitions, as needed, following the reallocations.
A.1.
The Air Force Materiel Command Director of Logistics should revise Air Force Manual 23-110 to include specific materiel management transition guidance. Specifically, the guidance should require:
a. Transition gaining locations to have a training plan in place to ensure personnel are adequately trained before working asset buy and repair requirement computations.
b. Air Force Materiel Command personnel to establish a transition team to monitor all stages of the transition, to include ensuring personnel are adequately trained and providing additional oversight over requirement computations worked by new item managers.
Revise Standard Base Supply System transaction processing procedures to automatically select special requisition Air Force routing identifier codes. Issue guidance to base supply personnel reminding them of proper receipt transaction procedures. Discontinue the automated transaction deletion program, since the revised Standard Base Supply System procedures render the program obsolete.
C.2. The Deputy Chief of Staff, Installations and Logistics should:
a. Revise Air Force Manual 23-110 to direct working capital fund managers to input reversing entries that will correct erroneous transactions identified during monthly M01 list reviews.
b. Direct all base supply working capital fund managers to:
(1) Review the most current M01 list to evaluate the propriety of all transactions affecting the Purchases at Cost account.
(2) Input reversing entries to correct any erroneous transactions identified during the M01 list review. This will correct all deficiencies, including those described in Results-A and Results-B.
A.1. The Air Force Reserve Command, Deputy Chief of Staff, Logistics, should:
a. Address to subordinate units the importance of following Air Force equipment guidance related to small arms accountability, inventory, documentation, storage, and disposal.
b.
Request the Air Force Reserve Command Inspector General to include small arms accountability, inventory, documentation, storage, and disposal requirements as a special emphasis area in unit inspections.
B.1. The Air Force Reserve Command, Deputy Chief of Staff, Logistics, should:
a. Request all Air Force Reserve Command units to revalidate small arms and conversion kit authorizations using Allowance Standard 538.
b. Recompute requirements (including M-16 conversion kits), reallocate small arms on hand based on recomputed authorizations, and adjust requirements and requisitions, as needed, following the reallocations.
Finalize and issue the revised Air Force Manual 23-110 requiring personnel to identify and promptly return secondary items to the primary control activity.
Finalize and issue the revised Air Force Manual 23-110 requiring personnel to research and validate credit due on repairable items returned to the primary control activity.
The Office of the Commander, U.S. Fleet Forces Command should:
1. Emphasize Chief of Naval Operations requirements that all ships maintain proper inventory levels based on authorized allowances and demand history.
2. Emphasize Chief of Naval Operations and Naval Supply Systems Command internal control procedures to ensure inventory levels in the Hazardous Material Minimization Centers remain within authorized limits, and return material exceeding requisitioning objectives to the supply system.
3. Emphasize Chief of Naval Operations requirements that ships requisition only hazardous materials authorized for shipboard use, and return unauthorized material to the supply system.
4. Enforce Naval Supply Systems Command requirements that ships prepare and submit Ship’s Hazardous Material List Feedback Reports and Allowance Change Requests whenever required.
The Naval Supply Systems Command should:
5.
Establish an interface between authorized allowance documents and the Type-Specific Ship’s Hazardous Material List to ensure that hazardous material items authorized for shipboard use also have authorized allowance levels.
6. Establish procedures to validate Hazardous Material Minimization Centers’ low and high inventory levels against the inventory levels in Relational Supply for the same items, to ensure Hazardous Material Minimization Centers’ high limits do not exceed Relational Supply high limits.
7. Establish procedures that require unissued hazardous material in the Hazardous Material Minimization Centers to be counted as on-hand inventory before reordering Relational Supply stock.
8. Develop and implement a hazardous material usage database that accumulates and retains data on supply system hazardous material ordered and used by the ship, for use in planning future hazardous material requirements.
9. Establish procedures to ensure that Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat technicians perform tasks in accordance with the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Desk Guide.
10. Establish a working group to determine the feasibility of developing ship-specific allowance-control documents for all items managed in the Hazardous Material Minimization Centers not already on an approved allowance list.
The Office of the Commander, U.S. Fleet Forces Command should:
11. Return the prohibited undesignated hazardous material items to the supply system for credit.
The Naval Sea Systems Command, with the assistance of Naval Supply Systems Command, should:
12. Establish formal written guidance stating what system allowance list hazardous material is designated for and the current quantities allowed.
Guidance should include requisitioning metrics that cross-check hazardous material items against designated system designs as generated by Naval Inventory Control Point and Naval Surface Warfare Center Carderock Division – Ship Systems Engineering Station, technical manuals, and the one-time General Use Consumables List.
13. Clarify Naval Sea Systems Command Instruction 4441.7B/Naval Supply Systems Command Instruction 4441.29A to measure the quality of hazardous material loadouts instead of the quantity or percentage of hazardous material loaded on ships.
The Office of the Supervisor of Shipbuilding, Conversion, and Repair, Newport News should:
14. Discontinue requisitioning aircraft cleaning, maintenance, and preservation hazardous material for actual aircraft before Post Shakedown Availability.
15. Establish formal written local procedures that require detailed support, justification, and audit documentation for system validation on all hazardous material requisitions received from ship personnel after Load Coordinated Shipboard Allowance List delivery. This support should indicate the specific system the item is required for and the document numbers for the Preventive Maintenance Schedule, Maintenance Requirement Cards, Allowance Equipage List, Allowance Parts List, General Use Consumables List, and technical manuals. An Allowance Change Request should be included, if applicable.
16. Use the Outfitting Support Activity when requisitioning all hazardous material items for ship initial outfitting to minimize local procurement, as required by the Navy Outfitting Program Manual of September 2002.
The Naval Supply Systems Command should:
17. Enforce compliance with established guidance for material offloads to ensure uniform use of DD Form 1348 documents among ships and the proper processing of Transaction Item Reporting documents to ensure inventory accuracy.
18.
Update the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat Desk Guide to include specific requirements for the Enhanced Consolidated Hazardous Material Reutilization and Inventory Management Program Afloat technician when offloading Naval Supply Systems Command-owned hazardous material.
The Naval Inventory Control Point should:
1. In coordination with Naval Air Systems Command, update policy and procedures issued to field activities on managing and reporting aircraft engine/module container inventory.
2. Require Fleet activities to provide a daily transaction item report of all intra-activity receipts and issues of engine/module containers to item managers.
3. Establish controls to ensure containers are not procured in excess of requirements.
[Status column from the original table, no longer aligned to individual recommendations: Concurred, closed, implemented (8); Nonconcurred, closed, implemented (1)]
4. Include the Aircraft Engine Container Program as an assessable unit in Naval Inventory Control Point’s Management Control Program.
The Naval Air Systems Command should:
5. Fully fund the engine/module repair container program in accordance with requirements generated by Naval Inventory Control Point.
6. Report any engine/module containers costing $5,000 or more in the Defense Property Accounting System.
The Naval Inventory Control Point and Naval Air Systems Command should:
7. Require Naval Aviation Depots, Aircraft Intermediate Maintenance Depots, and Fleet activities to perform periodic inventories of engine/module containers and report the results to Naval Inventory Control Point’s item managers.
The Commandant of the Marine Corps should:
1. Terminate the Norway Air-Landed Marine Expeditionary Brigade program.
2.
Prepare a comprehensive statement encompassing disposal costs, equipment condition, and the status of outstanding procurements and repairs of the excess on-hand ground equipment and supplies, and identify Norway Air-Landed Marine Expeditionary Brigade program items that would satisfy outstanding procurements and repairs for fiscal year 2003 and the out years.
3. Cancel the planned modernization procurements associated with the replacement of Norway Air-Landed Marine Expeditionary Brigade equipment, subject to negotiated termination costs for one of the six modernization projects.
4. Cancel all procurements that replenish Norway Air-Landed Marine Expeditionary Brigade prepositioned inventory shortages.
The Deputy Chief of Naval Operations, Warfare Requirements and Programs should:
1. Perform analyses to establish validated engine readiness requirements, incorporate ready-for-training engine readiness rates for training aircraft engines, and establish separate requirements for different categories of aircraft (such as combat, support, and training).
2. Formally document the engine requirements and supporting rationale in Department of the Navy guidance.
The Deputy Chief of Naval Operations, Fleet Readiness and Logistics should:
3. Coordinate with Naval Inventory Control Point and Naval Air Systems Command to require more realistic parameter inputs to the Retail Inventory Model for Aviation while encouraging engine maintenance strategies that will ultimately reduce turnaround time and increase reliability (mean time between removals).
4. Issue written guidance to assign responsibility for calculating engine war reserve requirements and the need to compute additional war reserve engine/module requirements.
The Deputy Chief of Naval Operations, Warfare Requirements and Programs should:
5.
Adjust out-year F414-GE-400 engine and module procurement requirements (to be reflected in the President’s 2004 Budget) to agree with Naval Inventory Control Point’s revised Baseline Assessment Memorandum 2004 requirements.
The Commander, Naval Inventory Control Point should:
6. Reiterate Secretary of the Navy policy that documentation supporting official Baseline Assessment Memorandum submissions be retained for no less than 2 years.
The Deputy Chief of Naval Operations, Fleet Readiness and Logistics should:
7. In coordination with the Deputy Chief of Naval Operations, Warfare Requirements and Programs, establish policy and adjust the procurement strategy for F414-GE-400 engines and modules to procure (based on current audit analyses) approximately 30 percent whole engines and 70 percent separate engine modules, thereby improving the engine/module repair capability.
8. Issue guidance requiring Naval Air Systems Command to determine, and annually reevaluate, the engine-to-module procurement mix for the F414-GE-400.
The Commander, Naval Air Systems Command should:
9. Reduce out-year AE1107C spare engine procurement by 12 (changed to 8 after receipt of management comments) through fiscal year 2008.
10. Adhere to the Chief of Naval Operations-approved model (Retail Inventory Model for Aviation) for calculations of spare engine requirements.
The Deputy Chief of Naval Operations, Warfare Requirements and Programs should:
11. Adjust planned out-year Aircraft Procurement, Navy-6 (APN-6) procurement requirements to reduce the quantities of T700-401C Cold and Power Turbine Modules by 10 each.
The Commandant of the Marine Corps should:
1. Validate the Time-Phased Force Deployment Database equipment requirements, determine how the Marine Corps will source (make available) the required equipment, and determine if the required equipment is on the unit’s table of equipment.
2.
Evaluate the Asset Tracking Logistics and Supply System II+ to determine if it adequately meets user needs and, if not, take sufficient action to correct identified deficiencies. 3. Perform onsite technical assessments to determine the extent of required maintenance/repair. 4. Provide dedicated organic or contract resources to reduce maintenance backlogs. 5. Establish an acceptable level of noncombat deadline equipment relative to the total combat deadline equipment and total equipment possessed, and report it outside the unit to the Marine Expeditionary Force commander. This would help ensure that the extent of nonmajor maintenance/repair requirements receives appropriate visibility and would support requests for resources to reduce maintenance backlogs. Create a Joint Logistics Command that is responsible for the global end-to-end supply chain and that includes the U.S. Transportation Command mission, the Defense Logistics Agency, and the service logistics and transportation commands as components, with Regional Combatant Commanders retaining operational control of the flow of in-theater logistics and program managers retaining responsibility for the lifecycle logistics support plan and configuration control. Lead the work to create an integrated logistics information system. Appoint an external advisory board of relevant industry experts to assist in guiding this effort. Specific recommendations were made for tactical supply, theater distribution, strategic distribution, national- and theater-level supply, and command and control. Supply chain planning needs to be better integrated with a common supply chain vision. The newly designated distribution process owner (U.S. Transportation Command), in concert with the Army, the other services, and the Defense Logistics Agency, should develop and promulgate a common vision of an integrated supply chain. The complementary, not redundant, roles of each inventory location, distribution node, and distribution channel should be defined.
Every joint logistics organization should examine and refine its processes to ensure detailed alignment with this vision. Review doctrine, organizational designs, training, equipment, information systems, facilities, policies, and practices for alignment with the supply chain vision and defined roles within the supply chain. The assumptions embedded within the design of each element of the supply chain with regard to other parts of the supply chain should be checked to ensure that they reflect realistic capabilities. Improve the joint understanding of the unique field requirements of the services. Likewise, the services need to understand the Defense Logistics Agency, the U.S. Transportation Command, and the General Services Administration processes and information requirements, as well as those of private-sector providers. Metrics should be adopted to maintain alignment with the vision. Logistics information systems need adequate levels of resources to provide non-line-of-sight mobile communications and effective logistics situational awareness in order to make new and emerging operational and logistics concepts feasible. Deliberate and contingency planning should include improved consideration of the logistics resource requirements necessary to execute sustained stability and support operations. Resourcing processes should consider uncertainty and the implications of capacity shortages. The flexibility of financial and resource allocation processes to rapidly respond to the need for dramatic changes in logistics capacity that sometimes arises from operational forecast error should be improved. Logistics resource decisions should more explicitly consider how much buffer capacity should be provided in order to handle typical operational and demand variability without the development of large backlogs. Joint training should be extended to exercise the entire logistics system.
The Army should review all wartime and contingency processes from the tactical to the national level to determine which are not exercised in training with all requisite joint organizations participating. Such processes range from setting up tactical logistics information systems to planning a theater distribution architecture to determining national level spare parts distribution center capacity requirements. Review which tasks and processes do not have adequate doctrine and mission training plans. Planning tools and organizational structures need to better support expeditionary operations. Automation should more effectively support the identification of logistics unit requirements to support a given operation. Unit “building blocks” should be the right size and modular to quickly and effectively provide initial theater capabilities and then to facilitate the seamless ramp-up of capacity and capability as a deployment matures. Conclusions and recommendations fall into three categories: programmatic, constructive, and operational. Programmatic conclusions and recommendations include: logistics transformation and interoperability. If interoperability is important to transformation, the Office of the Secretary of Defense must fund it adequately and specifically, not just the component systems and organizations being integrated. Services and agencies will be reluctant to act against their own financial interest. Title 10 can be used to prevent joint logistics transformation and interoperability, and needs clarification. If a Logistics Command is created, Title 10 may need to be amended. Expanded Office of the Secretary of Defense leadership (beyond technical standardization) for joint logistics transformation is necessary to effect change. The Logistics Systems Modernization office efforts to realign business processes and to prioritize rapid return on investment initiatives are a good start and can be expanded. A 4-Star Combatant Command – U.S. 
Logistics Command – in charge of logistics needs to be created, following the example of the U.S. Strategic Command. The responsibilities and enforcement powers of this Logistics Command may be significantly different than the U.S. Strategic Command model and require clear specification. Some responsibilities that this Command could undertake include: Defining the distribution authorities, scenarios, business processes and process ownership at the “hand-off” from U.S. Transportation Command distribution to services distribution. Developing doctrine and implementing joint business processes and rules for logistics interoperability between services, prioritizing known problem and conflict areas, and assigning ownership of business processes across the broader Supply Chain Operations Reference-defined supply chain. Identifying budget requirements for logistics interoperability, and requiring logistics interoperability to be adequately funded and planned as part of the acquisition process of any logistics systems. Accelerating interoperability testing of all Global Combat Support System implementations both within and across services and agencies, with a spiral development methodology. Coordinating and communicating various isolated ongoing efforts in defining logistics Extensible Markup Language schema, business processes, databases, published web services and other joint logistics projects, with the Integrated Data Environment and Enterprise Resource Planning programs underway in the services and agencies. Where conflicts, redundancies or gaps are identified, the U.S. Logistics Command may function as an “honest broker” to develop an interoperable solution, or as a “sheriff” to enforce an interoperable solution. 
A single logistics business process model needs to be created as a common reference, with the understanding that the modeling effort will be descriptive rather than prescriptive, due to the Services' autonomy and the need to continue migrating legacy systems and building new logistics capability. Since all Services, Agencies and the Office of the Secretary of Defense are employing the Supply Chain Operations Reference Model for logistics, some degree of commonality should already exist. If the process modeling effort can build on existing U.S. Transportation Command/Defense Logistics Agency business process models, and incorporate business process models from each of the Services, it may be available earlier and used more effectively. A "greenfield" effort may have limited utility and never get beyond the requirements stage. Efforts to align logistics data are underway within the Joint Staff Logistics Directorate, and in the ongoing U.S. Transportation Command/Defense Logistics Agency modeling. The touchpoints between these alignment efforts and the actual Enterprise Resource Planning implementations within the services and joint agencies could be expanded. A variety of "to-be" logistics business process models must be generated to meet the requirements of varying future war fighting scenarios. For example, loss of space assets or enemy use of electromagnetic pulse will create significant constraints on logistics interoperability, and contingency business processes should be designed for those scenarios. The logistics business process must be defined from end-to-end at the DOD level, and then Services and Agencies must assess how they will or will not align with those processes.
Alignment, interoperability and jointness are consensus goals for system development, but some Service decisions not aligned with specific DOD level processes may provide net benefits and increase the robustness of the overall logistics System of Systems (the federated supply chain, or loosely-coupled approach). The ongoing questions that the U.S. Logistics Command will address are these: Should the default state for interoperability be alignment, with non-alignment developed as a scenario-based exception? Or should the default state for interoperability be non-alignment, with occasional moments of alignment (specific data feeds of a finite duration)? Some form of charter or statutory legislation is needed to prevent joint logistics transformation from backsliding into non-interoperable organizations and systems when leadership changes. Change management for joint logistics needs to be resourced specifically, in addition to current resources for logistics transformation within services and joint agencies. Fuse the logistics and transportation functions into an integrated U.S. Logistics Command. Implement the Beyond Goldwater-Nichols Phase I recommendation to merge much of the Joint Staff Directorate of Logistics with its Office of the Secretary of Defense counterpart, the Deputy Under Secretary of Defense (Logistics & Materiel Readiness), into an office that reports to the Under Secretary of Defense for Acquisition, Technology, and Logistics. The public sector should seek to bolster the fault tolerance and resilience of the global container supply chain. The closure of a major port--for whatever reason--would have a significant effect on the U.S. economy. The federal government should lead the coordination and planning for such events for two reasons. First, the motivation of the private sector to allocate resources to such efforts is subject to the market failures of providing public goods.
Second, the government will be responsible for assessing security and for decisions to close and reopen ports. Security efforts should address vulnerabilities along supply-chain network edges. Efforts to improve the security of the container shipping system continue to be focused on ports and facilities (although many ports around the world still failed to meet International Ship and Port Security Code guidelines even after the July 1, 2004, deadline). Unfortunately, the route over which cargo travels is vast and difficult to secure. Measures to keep cargo secure while it is en route are essential to a comprehensive strategy to secure the global container supply chain. Research and development should target new technologies for low-cost, high-volume remote sensing and scanning. Current sensor technologies to detect weapons or illegal shipments are expensive and typically impose significant delays on the logistics system. New detection technologies for remote scanning of explosives and radiation would provide valuable capabilities to improve the security of the container shipping system. Codify in joint doctrine the distinction between joint theater-level logistics and Army/Land component logistics requirements and the need for a joint theater-level logistics commander. Document a Joint Theater Sustainment Command and assign it to Combatant Commands. Implement useful practices of other services. Don't preclude early use of the Logistics Civilian Augmentation Program. Complete a thorough business-based cost/benefit analysis of Radio Frequency Identification before spending more money on it. Make directive authority for the Combatant Command real. Joint doctrine must: Be prescriptive in its language, purging words like "should" and "attempt" and replacing them with specific direction. Be joint and comprehensive.
It must explicitly address the joint organizational structure and staffing, develop and institutionalize joint processes and procedures, and specifically require, not assume, the necessary communications infrastructure and information tools to support this vision. Support an expeditionary logistics capability to enable rapid deployment and sustainment of flexible force structures in austere theaters around the globe. Reconcile with the emerging concepts of net-centric warfare and sense and respond logistics, balancing past lessons with the needs for the future. Joint doctrine must be based on today's capabilities, not tomorrow's promises. Continue to identify the combatant commander as the locus of control for logistics in support of deployed forces, and specify the tools, forces, processes, and technologies required from supporting commands. Develop a true expeditionary logistics capability. Develop logistics systems able to support expeditionary warfare. Logistics systems must be designed, tested, and developed to support a mobile, agile warfighter. Logistics capabilities need to be native to an expeditionary unit for swift and agile deployment. The people, equipment, and systems that accompany these small, cohesive units must be able to integrate data within the services and commands as well as among the coalition partners. Logistics communications planning and infrastructure are an integral part of any operation, and must be robust, fully capable, and deployable in both austere and developed environments. Planning and development of the required infrastructure must consider the issues of bandwidth, mobility, security and aggregation of logistics data. Retool the planning processes. A follow-on replacement for the current Time-Phased Force and Deployment Data/Joint Operation Planning and Execution System process is required, with the necessary improvements in task structures and planning speed.
This process should directly drive sustainment planning, including acquisition and distribution decisions. The challenge of requirements identification and fulfillment in a deployed environment is a joint challenge. Planning tools must be developed that recognize and fuse the consumption of materiel and fulfillment of warfighter requirements across the joint force. The speed and flexibility of future operations demand that a closer and more dynamic relationship be developed with suppliers in the industrial base and prime vendor partners. Create an integrated theater distribution architecture. Theater distribution capability must be embedded in a permanent organization within the theater or at least rapidly deployable to any global location. The balance of reserve forces and the implications of the activation cycle must be considered in the development of this organizational structure and manning. The need for a joint in-theater distribution cross dock, staging, and break-bulk operation must be explicitly recognized in every Combatant Command Area of Responsibility. Rapid maneuver and task reorganization precludes a 100% "pure pallet" shipment. Retrograde and reverse logistics capabilities must also be embedded. Leadership must recognize that the growth and development of "joint logisticians" who can operate and lead effectively in the theater environment will take time and effort, potentially altering established career progression plans. Resolve the technology issues. Rationalize logistics systems. Current battlefield and deployment realities include the existence of multiple systems for logistics support. DOD must complete and deploy an integrated architecture, including operational, systems, technical, and data elements to streamline the systems capabilities to the joint warfighter, and manage the portfolio of systems to eliminate those that cannot support the future state. Create visibility within logistics and supply systems that extends to the tactical units.
Today's warfighting mission includes mobile expeditionary engagements. Support systems need to include the ability to communicate and synchronize with rear support units and systems 24 hours a day, 365 days a year in both austere and developed environments. Ensure communications capability and availability for logistics, regardless of the environment. Logistics is an information-intensive function with constant requirements for updated information. Logistics support planning needs to include communications-level planning and should be completed before deployment. Develop the foundational role of the Distribution Process Owner. The Distribution Process Owner concept must be implemented swiftly and should recognize the potential resource requirements in the near- and mid-term to complete this task. This is a necessary first step, addressing distribution challenges, and should facilitate the establishment of an integrated, end-to-end logistics architecture, eliminating the confederation of stovepipes. Financial and transactional systems should not be a hindrance to going to war: They must be designed so that the transition from peace to war is seamless; the ability to employ these systems in a deployed environment must take precedence over garrison requirements. More emphasis needs to be placed on managing retrograde and repairables. Processes must be synchronized and integrated across the stovepipes. Synchronize the chain: from Continental United States to Area of Responsibility. Capacities across the distribution nodes and distribution links, and across the entire logistics network but particularly in theater, must be reviewed, understood, and actively managed. The ability to determine and manage practical and accurate throughput capacities for air and seaports, along with an understanding of the underlying commercial infrastructure, is essential to future planning. The ability to evaluate possible scenarios for host nation support is also critical.
Deploy Performance Based Logistics agreements more comprehensively. Standardize Performance Based Logistics implementation. Implementation of Performance Based Logistics must become more standard to prevent confusion with other contractor support services and activities. To the extent possible, common metrics and terms must be developed and applied. Implement Performance Based Logistics across total weapons systems. Support broad end-to-end application. Much integration and synchronization is required to ensure full system synchronization of performance metrics, but the end capability of tracking total system performance to both cost and "power by the hour" is a significant potential advancement in warfighter support. Make Radio Frequency Identification real. Extend Radio Frequency Identification to the warfighter. Asset tracking system capabilities, infrastructure, and support must extend to the farthest reaches of the logistics supply chain, even in austere environments. Do not combine U.S. Transportation Command and Defense Logistics Agency. Roles, missions and competencies of the two organizations are too diverse to create a constructive combination. Organizational merger would not significantly facilitate broader transformational objectives of supply chain integration. Both organizations perform unique activities/functions in the supply chain. The real problem is not that the two organizations are separate, but that their activities are not well integrated. Elevate leadership for Department of Defense global supply chain integration. Designate a new Under Secretary of Defense for Global Supply Chain Integration reporting directly to the Secretary of Defense. Ensure the Global Supply Chain Integrator is a civilian with established credibility in the field of supply chain management. Establish the Global Supply Chain Integrator's appointment as a fixed term for a minimum of 6 years. Direct the U.S.
Transportation Command and the Defense Logistics Agency to report to the Global Supply Chain Integrator. Create a working relationship for the Global Supply Chain Integrator with the Chairman of the Joint Chiefs of Staff. Build the Global Supply Chain Integrator's staff from existing staffs in the Office of the Secretary of Defense, the U.S. Transportation Command, and the Defense Logistics Agency. Empower a Global Supply Chain Integrator with the required authority and control to effect integration. The Global Supply Chain Integrator should be granted authority to: Build end-to-end integrated supply chains through the establishment of policies and procedures. Enable privatization and partnering with global commercial distributors. Oversee program management decisions related to major systems vendor support. Establish/authorize organizations and processes to control flow during deployment/wartime scenarios. Control budgetary decisions affecting the U.S. Transportation Command, the Defense Logistics Agency, and the distribution budgets of the services. In addition to the contacts named above, key contributors to this report were Thomas W. Gosling, Assistant Director, Susan C. Ditto, Amanda M. Leissoo, Marie A. Mak, and Janine M. Prybyla. | Military operations in Iraq and Afghanistan have focused attention on the Department of Defense's (DOD) supply chain management. The supply chain can be critical to determining outcomes on the battlefield, and the investment of resources in DOD's supply chain is substantial. In 2005, with the encouragement of the Office of Management and Budget (OMB), DOD prepared an improvement plan to address some of the systemic weaknesses in supply chain management. GAO was asked to monitor implementation of the plan and DOD's progress toward improving supply chain management.
GAO reviewed (1) the integration of supply chain management with broader defense business transformation and strategic logistics planning efforts; and (2) the extent to which DOD is able to demonstrate progress. In addition, GAO developed a baseline of prior supply chain management recommendations. GAO surveyed supply chain-related reports issued since October 2001, identified common themes, and determined the status of the recommendations. DOD's success in improving supply chain management is closely linked with its defense business transformation efforts and completion of a comprehensive, integrated logistics strategy. Based on GAO's prior reviews and recommendations, GAO has concluded that progress in DOD's overall approach to defense business transformation is needed to confront problems in other high-risk areas, including supply chain management. DOD has taken several actions intended to advance business transformation, including the establishment of new governance structures and the issuance of an Enterprise Transition Plan aligned with the department's business enterprise architecture. As a separate effort, DOD has been developing a strategy--called the "To Be" logistics roadmap--to guide logistics programs and initiatives across the department. The strategy would identify the scope of logistics problems and capability gaps to be addressed and include specific performance goals, programs, milestones, and metrics. However, DOD has not identified a target date for completion of this effort. According to DOD officials, its completion is pending the results of the department's ongoing test of new concepts for managing logistics capabilities. Without a comprehensive, integrated strategy, decision makers will lack the means to effectively guide logistics efforts, including supply chain management, and the ability to determine if these efforts are achieving desired results.
DOD has taken a number of actions to improve supply chain management, but the department is unable to demonstrate at this time the full extent of its progress that may have resulted from its efforts. In addition to implementing audit recommendations, DOD is implementing initiatives in its supply chain management improvement plan. However, it is unclear how much progress its actions have resulted in because the plan generally lacks outcome-focused performance metrics that track progress in the three focus areas and at the initiative level. DOD's plan includes four high-level performance measures, but these measures do not explicitly relate to the focus areas, and they may be affected by many variables, such as disruptions in the distribution process, other than DOD's supply chain initiatives. Further, the plan does not include overall cost metrics that might show efficiencies gained through the efforts. Therefore, it is unclear whether DOD is meeting its stated goal of improving the provision of supplies to the warfighter and improving readiness of equipment while reducing or avoiding costs. Over the last 5 years, audit organizations have made more than 400 recommendations that focused specifically on improving certain aspects of DOD's supply chain management. About two-thirds of the recommendations had been closed at the time GAO conducted its review, and most of these were considered implemented. Of the total recommendations, 41 percent covered the focus areas in DOD's supply chain management improvement plan: requirements forecasting, asset visibility, and materiel distribution. The recommendations addressed five common themes--management oversight, performance tracking, planning, policy, and processes. |
Since plutonium production ended at the Hanford Site in the late 1980s, DOE has focused on cleaning up the radioactive and hazardous waste accumulated at the site. It has established an approach for stabilizing, treating, and disposing of the site’s tank wastes. Its planned cleanup process involves removing, or retrieving, waste from the tanks; treating the waste on site; and ultimately disposing of the lower-activity radioactive waste on site and sending the highly radioactive waste to a geologic repository for permanent disposal. As cleanup has unfolded, however, the schedule has slipped, and the costs have mounted. According to DOE’s latest estimate in June 2008, treatment of the waste is not expected to begin until late 2019 and could continue until 2050 or longer. The following two figures show a tank farm and construction of waste treatment plant facilities at the Hanford Site. Most of the cleanup activities at Hanford, including the emptying of the underground tanks, are carried out under the Hanford Federal Facility Agreement and Consent Order among DOE, Washington State’s Department of Ecology, and the federal Environmental Protection Agency. Commonly called the Tri-Party Agreement, this accord lays out legally binding milestones for completing the major steps of Hanford’s waste treatment and cleanup processes. The agreement was signed in May 1989 and has been amended a number of times since then. A variety of local and regional stakeholders, including county and local governmental agencies, citizen and advisory groups, and Native American tribes, also have long- standing interests in Hanford cleanup issues. Two primary contractors are carrying out these cleanup activities; one is responsible for managing and operating the tank farms, and the other for constructing the facilities to treat the tank waste and prepare it for permanent disposal. During our review, these contractors were CH2M Hill and Bechtel, respectively. 
Both contracts are cost-reimbursement contracts, which means that DOE pays all allowable costs. In addition, the contractors can also earn a fee, or profit, by meeting specified performance objectives or measures. Applicable DOE orders and regulations are incorporated into these contracts, either as distinct contract clauses or by reference. For example, contractors are required to use an accounting system that provides consistency in how costs are accumulated and reported so that comparable financial transactions are treated alike. Such a system is to include consistent practices for determining how various administrative costs are assessed or how indirect costs for labor are calculated. Contractors also are required to implement an integrated safety management system, a set of standardized practices that allow the contractor to identify hazards associated with a specific scope of work, to establish controls to ensure that work is performed safely, and to provide feedback that supports continuous improvement. The system, which allows contractors to stop work when conditions are unsafe, is intended to instill in everyone working at the site a sense of responsibility for safety. This policy is reinforced by labor agreements between the contractor and its workforce that explicitly allow work stoppages as needed for safety and security reasons. With few exceptions, DOE’s sites and facilities are not regulated by the Nuclear Regulatory Commission or by the Occupational Safety and Health Administration. Instead, DOE provides internal oversight at several different levels. DOE’s Office of River Protection oversees the contractors directly. In addition, the Office of Environmental Management provides funding and program direction. DOE’s Office of Enforcement and other oversight groups within the Office of Health, Safety, and Security oversee contractors’ activities to ensure nuclear and worker safety. 
Finally, the Defense Nuclear Facilities Safety Board, an independent oversight organization created by Congress in 1988, provides advice and recommendations to the Secretary of Energy to help ensure adequate protection of public health and safety. DOE officials reported that from January 2000 through December 2008, work on the Hanford tank farms and the waste treatment plant temporarily stopped at least 31 times to address various safety or construction concerns. These work stoppages ranged in duration from a few hours to more than 2 years, yet little supporting documentation of these occurrences exists. DOE reported that of the 31 work stoppages, 12 occurred at the tank farms and 19 at the waste treatment plant. Sixteen of the work stoppages reportedly resulted from concerns about safety. A complete listing of these work stoppages is included in appendix II. These work stoppages were initiated to respond directly to an event in which property was damaged or a person was injured, or they addressed an unsafe condition with the potential to harm workers in the future. Four of these work stoppages were relatively brief, lasting less than 2 days, and were characterized by DOE and contractor officials as proactive safety "pauses." For example, in October 2007, after a series of slips, trips, or falls during routine activities, contractor managers stopped work at the waste treatment plant site for 1 hour to refresh workers' understanding of workplace hazards. The following two examples, for which supporting documentation was available, illustrate the types of work stoppages occurring at the Hanford Site because of safety concerns: Controlling worker exposure to tank farm vapors. Beginning in 2002, as activities to transfer waste from leak-prone, single-shell tanks to more secure double-shell tanks disturbed tank contents, the number of incidents increased in which workers complained of illnesses, coughing, and skin irritation after exposure to the tank vapors.
The Hanford underground storage tanks contain a complex variety of radioactive elements and chemicals that have been extensively mixed and commingled over the years, and DOE is uncertain of the specific proportions of chemicals contained in any one tank. These constituents generate numerous gases, such as ammonia, hydrogen, and volatile organic compounds, which are purposely vented to release pressure on the tanks, although some gases also escape through leaks. During the 1990s, the tank farm contractor evaluated potential hazards and determined that if workers around the tanks used respirators, they would be sufficiently protected from harmful gases. DOE reported in 2004, however, that disturbing the tank waste during transfers had changed the concentration of gases released in the tanks and that no standards for human exposure to some of these chemicals existed. To protect workers’ health, in 2004 the tank farm contractor equipped workers with tanks of air like those used by firefighters. Work at the tank farms stopped intermittently for about 2 weeks as a result, in part because the contractor had to locate and procure sufficient self-contained air and equipment for all workers. Accidental spill of radioactive and chemical wastes at tank S-102. In July 2007, as waste was being pumped out of a single-shell to a double-shell tank, about 85 gallons of waste was spilled. DOE has been gradually emptying waste from Hanford’s single-shell tanks into double-shell tanks in preparation for treatment and permanent disposal, but because the tank waste contains sludge and solids, waste removal has been challenging. Because the tanks were not designed with specific waste retrieval features, waste must be retrieved through openings, called risers, in the tops of the tanks; technicians must insert specially designed pumps into the tanks to pump the waste up about 45 to 60 feet to ground level. 
DOE has used a variety of technologies to loosen the solids, including sprays of acid or water to help break up the waste and a vacuum-like system to suck up and remove waste through the risers at the top. On July 27, 2007, during retrieval of radioactive mixed waste from a 758,000-gallon single-shell tank, a pump failed, spilling 85 gallons of highly radioactive waste to the ground. At least two workers were exposed to chemical vapors, and later several workers reported health effects they believed to be related to the spill. Retrieval operations for all single-shell tanks were suspended after the accident, and DOE did not resume operations until June 2008, a delay of 1 year, while the contractor cleaned up the spill and DOE and the contractor investigated the accident to evaluate the cause, the contractor’s response, and appropriate corrective action. DOE officials reported that the remaining 15 work stoppages resulted from concerns about construction quality and involved rework to address nuclear safety or technical requirements that had not been fully met, such as defective design, parts fabrication and installation, or faulty construction. For example: Outdated ground-motion studies supporting seismic design of the waste treatment plant. In 2002, the Defense Nuclear Facilities Safety Board began expressing concerns that the seismic standards used to design the waste treatment facilities were not based on the most current ground- motion studies and computer models or on the geologic conditions present directly beneath the construction site. After more than 2 years of analysis and discussion, DOE contracted for an initial seismic analysis, which confirmed the Defense Nuclear Facilities Safety Board’s concerns that the seismic criteria were not sufficiently conservative for the largest treatment facilities—the pretreatment facility and the high-level waste facility. 
Revising the seismic criteria caused Bechtel to recalculate thousands of engineering estimates and to rework thousands of design drawings to ensure that tanks, piping, cables, and other equipment in these facilities were adequately anchored. Bechtel determined that the portions of the building structures already constructed were sufficiently robust to meet the new seismic requirements. By December 2005, however, Bechtel estimated that engineering rework and other changes to tanks and other equipment resulting from the more conservative seismic requirement would increase project costs substantially and add as much as 26 months to the schedule. Ultimately, work on the two facilities was suspended for 2 years, from August 2005 until August 2007. About 900 workers were laid off as a result. DOE does not routinely collect or formally report information about work stoppages, in part because federal regulations governing contracts do not require contractors to track work stoppages and the reasons for them. While federal acquisition regulations do require that contractors implement a reliable cost-accounting system, the regulations do not require contractors to centrally collect information on the specific circumstances surrounding a work stoppage. Without a centralized system for collecting explanatory data on work stoppages, the majority of information DOE reported to us is based on contractors’ and DOE officials’ recollections of those events or on officials’ review of detailed logs maintained at each of the facilities. Officials expressed concern that systematically monitoring all work stoppages could send the message that work stoppages should be avoided, possibly hampering effective implementation of DOE’s integrated safety management policy. This policy explicitly encourages any employee to “stop work” to address conditions that raise safety concerns. 
Officials said they believe that work stoppages help bolster workplace safety and construction quality because work can be halted and corrective action taken before someone is seriously injured, property is seriously damaged, or poor workmanship has compromised the quality and functionality of a facility. Officials said that systematically monitoring all types of work stoppages could ultimately discourage workers from halting activities when unsafe conditions or construction problems emerge in the workplace. Under the terms of the cost-reimbursement contracts for the tank farms and the waste treatment plant, DOE generally pays the costs for corrective action or construction rework associated with temporary work stoppages and does not require the contractor to separately track these costs. Various categories of costs can be associated with work stoppages, with some easier to measure or separately identify than others. The category of costs related to correcting a problem that precipitates a work stoppage, such as the cost of investigating and cleaning up a hazardous waste spill or the cost of rework to address improper construction, is usually more easily measured. In contrast, lost productivity—expenditures for labor during periods workers were not fully engaged in productive work or the difference between the value of work that should have been accomplished against the value of work that was accomplished—is more difficult to quantify. Most of the work stoppages reported by DOE officials involved some corrective action or construction rework to address the problem precipitating the work stoppage. These are costs that tend to be easier to separately identify and track, and DOE has directed contractors to do so in certain instances, as it did for the July 2007 tank waste spill. 
For the work stoppages at the tank farms, corrective actions encompassed such activities as investigating and cleaning up the July 2007 spill, monitoring and testing vapors escaping from the tanks to determine the constituents, and training contractor employees on required new procedures or processes. For the work stoppages at the waste treatment plant, corrective actions at times involved retraining workers or developing new procedures to prevent future problems, although many of the work stoppages at the waste treatment plant involved construction rework. Construction rework can include obtaining new parts to replace substandard parts or labor and materials to undo installations or construction, followed by proper installation or construction—pouring new concrete, for example, or engineering and design work to address nuclear safety issues. The cost of lost productivity associated with a work stoppage can be more difficult to measure or separately identify, although under a cost- reimbursement contract, the government would generally absorb the cost. While no generally accepted means of measuring lost productivity exists, two methods have been commonly used. The first, a measure of the cost of idleness, or doing nothing, calculates the expense incurred for labor and overhead during periods that no productive work is taking place. These were the types of costs associated with a July 2004 suspension, or “stand- down,” of operations at the Los Alamos National Laboratory, where a pattern of mishaps led the contractor to stop most work at the facility for many months to address safety and security concerns. Laboratory activities resumed in stages, returning to full operations in May 2005. 
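The first measure, the cost of idleness, reduces to simple arithmetic over labor and overhead paid while no productive work occurs. A minimal sketch follows; the headcount, rate, overhead fraction, and idle hours are all illustrative assumptions, not figures from this report:

```python
# Sketch of the "cost of idleness" measure: the expense incurred for
# labor and overhead during periods when no productive work takes place.
# All inputs below are illustrative assumptions, not actual Hanford or
# Los Alamos figures.

def idleness_cost(workers, hourly_labor_rate, overhead_rate, idle_hours):
    """Labor plus overhead paid during hours of no productive work."""
    labor = workers * hourly_labor_rate * idle_hours
    overhead = labor * overhead_rate  # overhead assumed as a fraction of labor
    return labor + overhead

# Example: 100 workers idled for one 40-hour week at $50/hour,
# with overhead assumed at 30 percent of labor cost.
cost = idleness_cost(workers=100, hourly_labor_rate=50.0,
                     overhead_rate=0.30, idle_hours=40)
print(f"${cost:,.0f}")  # prints $260,000
```

In practice, as the Los Alamos example suggests, the hard part is not the arithmetic but establishing defensible inputs, particularly which labor hours count as idle.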
Although officials with both the National Nuclear Security Administration, which oversees the laboratory, and the Los Alamos contractor tried to measure lost productivity at the laboratory, each developed widely differing estimates—of $370 million and $121 million, respectively—partly because of difficulties measuring labor costs. According to DOE officials, when work stopped at the Hanford Site tank farms, CH2M Hill reassigned workers to other productive activities. Therefore, according to DOE officials, no costs of idleness were incurred as a result of those work stoppages. We were unable to verify, however, that tank farm workers had been reassigned to other productive work after the S-102 tank waste spill or during other tank farm work stoppages. During the period that work stopped on the pretreatment and high-level waste facilities of the waste treatment plant, in contrast, the contractor substantially reduced its workforce. According to Bechtel officials and documents, about 900 of 1,200 construction workers were laid off during the work stoppage, and the remaining workers were employed on the other facilities under construction.

An alternative means of measuring lost productivity associated with suspension of work activities is to measure the value of work planned that should have been accomplished but was not. This method concentrates on the work that was not done, as opposed to the cost of paying workers to do little or nothing. This method of measuring lost productivity is typically undertaken as part of a formal earned value management system, a project management approach that combines the technical scope of work with schedule and cost elements to establish an “earned value” for a specific set of tasks. If the earned value of work accomplished during a given period is less than the earned value of work planned for that period, then a loss in productivity has occurred, and the cost is equal to the difference in value between planned and finished work.
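The earned value comparison just described reduces to a single subtraction: the shortfall between the value of work planned for a period and the value of work actually accomplished. A minimal sketch, with hypothetical dollar values:

```python
# Sketch of measuring lost productivity under earned value management:
# if the earned value of work accomplished in a period is less than the
# value of work planned for that period, the shortfall is the cost of
# lost productivity. Dollar values below are hypothetical illustrations.

def lost_productivity(planned_value, earned_value):
    """Return the shortfall (>= 0) between planned and accomplished work."""
    return max(0.0, planned_value - earned_value)

# Hypothetical period: $1.2 million of work planned, $0.9 million earned.
planned = 1_200_000.0
earned = 900_000.0
print(f"Lost productivity: ${lost_productivity(planned, earned):,.0f}")
# prints Lost productivity: $300,000
```

The measure is only as fine-grained as the activities being tracked, which is why, as noted below, aggregate reporting across all 177 tanks could not isolate the effect of a single stoppage.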
DOE officials were unable to provide this measure for the three work stoppages that had supporting documentation, partly because the analyses of productivity under earned value management techniques did not disaggregate activities in a manner that could capture the three work stoppages. For example, with regard to the tank farms, DOE measures the overall progress made on waste stabilization and retrieval for all 177 storage tanks in aggregate but does not measure the direct impact of setbacks at any one storage tank, such as the spill at tank S-102. The contracts for the tank farms and the waste treatment plant do not generally require the contractors to separately track costs associated with work stoppages. Contractors must use an accounting system adequate to allow DOE to track costs incurred against the budget in accordance with federal cost-accounting standards. These standards permit a contractor to establish and use its own cost-accounting system, as long as the system provides an accurate breakdown of work performed and the accumulated costs and allows comparisons against the budget for that work. For the tank farm and waste treatment plant contracts, the contractors must completely define a project by identifying discrete physical work activities, essentially the steps necessary to carry out the project. This “work breakdown structure” is the basis for tracking costs and schedule progress. Corrective action and rework associated with work stoppages are generally not explicitly identified as part of a project’s work breakdown structure, although these costs are generally allowable and contractors do not have to account for them separately. Despite the lack of a requirement to track costs associated with work stoppages, DOE and contractors sometimes do track these costs separately, as in the following three circumstances: DOE can request the contractor to separately track costs associated with corrective action when DOE officials believe it is warranted. 
DOE specifically asked CH2M Hill to separately track costs associated with addressing the July 2007 tank spill because of the potential impacts on tank farm operations, workers, and the environment and because of heightened public and media attention to the event. Contractors may voluntarily track selected costs associated with a work stoppage if they believe that a prolonged suspension of work will alter a project’s cost and schedule. Contractors may want to collect this information for internal management purposes or to request an adjustment of contract terms in the future. For example, Bechtel estimated costs for both redesign work and lost productivity resulting from a change in seismic standards for the waste treatment plant. DOE may require a contractor to track particular costs associated with investigating an incident that it believes may violate DOE nuclear safety requirements or the Atomic Energy Act of 1954, as amended (these violations are referred to as Price-Anderson Amendment Act violations). DOE’s Office of Enforcement notifies the contractor in a “segregation letter” that an investigation of the potential violation will be initiated and that the contractor must segregate, or separately identify, any costs incurred in connection with the investigation. These are not costs of corrective action or rework. The costs incurred in connection with the investigation are generally not allowable. Not all such investigations involve a work stoppage, however. Of the 31 work stoppages reported to us by DOE officials, costs are available only for the July 2007 spill at the tank farm, since DOE specifically required the contractor to separately identify and report those costs. 
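The work breakdown structure described earlier, the set of discrete work activities against which costs accumulate and are compared with budget, can be sketched as a simple cost rollup. Activity names and dollar amounts below are hypothetical, not drawn from either contract:

```python
# Sketch of tracking incurred costs against a work breakdown structure
# (WBS): each discrete work activity accumulates costs, which can then be
# compared against the budget for that activity. Corrective action or
# rework costs simply accumulate under the same activity unless DOE
# directs the contractor to segregate them. Names and amounts are
# hypothetical.
from collections import defaultdict

class WorkBreakdownStructure:
    def __init__(self):
        self.budget = {}                    # activity -> budgeted cost
        self.incurred = defaultdict(float)  # activity -> accumulated cost

    def set_budget(self, activity, amount):
        self.budget[activity] = amount

    def record_cost(self, activity, amount):
        self.incurred[activity] += amount

    def variance(self, activity):
        """Positive result means the activity is over budget."""
        return self.incurred[activity] - self.budget.get(activity, 0.0)

wbs = WorkBreakdownStructure()
wbs.set_budget("tank-retrieval", 1_000_000.0)
wbs.record_cost("tank-retrieval", 600_000.0)  # planned work
wbs.record_cost("tank-retrieval", 500_000.0)  # unplanned rework, not segregated
print(f"${wbs.variance('tank-retrieval'):,.0f} over budget")  # prints $100,000 over budget
```

Because rework costs are folded into the same activity totals, the rollup shows that an activity overran its budget but not why, which mirrors the report's observation that stoppage costs are generally not separately identifiable.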
The costs of that incident totaled $8.1 million and included expenditures for cleaning up contamination resulting from the spill, investigating the causes of the accident, investigating health effects of the accident on workers, administrative support, and oversight of remediation activities. These were all considered allowable costs, and DOE has reimbursed the contractor for them. Although a subsequent investigation took place to determine whether nuclear safety rules had been violated, the costs to participate in that investigation ($52,913) were segregated as directed by DOE’s Office of Enforcement and were not billed to the government. Although DOE officials said that none of the reported work stoppages involved lost-productivity costs, the work stoppage to address the tank spill could well contribute to delays and rising costs for tank waste retrieval activities over the long run. Given that DOE was emptying only about one tank per year when we reported on Hanford tanks in June 2008, the 1-year suspension of waste retrieval activities, without additional steps to recover lost time, may contribute to delayed project completion. Many factors already contribute to delays in emptying the tanks. DOE has acknowledged that it will not meet the milestones agreed to with Washington State and the Environmental Protection Agency in the Tri-Party Agreement. We found that DOE’s own internal schedule for tank waste retrieval, approved in mid-2007, reflects time frames almost 2 decades later than those in the agreement. Ultimately, delays contribute to higher costs because of ongoing costs to monitor the waste until it is retrieved, treated, and permanently disposed of, and estimated costs for tank waste retrieval and closure have been growing. DOE estimated in 2003 that waste retrieval and closure costs from 2007 onward—in addition to the $236 million already spent to empty the first seven tanks—would be about $4.3 billion. By 2006, this estimate had grown to $7.6 billion. 
Because of limitations in DOE’s reporting systems, however, we were unable to determine the specific effect of the tank spill on overall tank retrieval costs beyond the $8.1 million in corrective action costs. In addition, although specific costs were not available for the 2-year suspension of construction activities at two of the facilities in the waste treatment plant, we have previously reported on some of the potential impacts. In an April 2006 testimony, we reported on the many technical challenges Bechtel had encountered during design and construction of the waste treatment plant. These ongoing technical challenges included changing seismic standards that resulted in substantial reengineering of the design for the pretreatment and high-level waste facilities, problems at the pretreatment plant with “pulse jet mixers” needed to keep waste constituents uniformly mixed while in various tanks, and the potential buildup of flammable hydrogen gas in the waste treatment plant tanks and pipes. In December 2005, Bechtel estimated that these technical problems could collectively add nearly $1.4 billion to the project’s estimated cost. Under the cost-reimbursement contracts for the tank farms and the waste treatment plant, costs associated with work stoppages, such as the costs of corrective action or construction rework, generally are allowable costs. As such, DOE generally pays these costs, regardless of whether they are separately identified or whether they are included in the overall costs of work performed. Even though the contractors are being reimbursed for the costs associated with work stoppages, they can experience financial consequences, either through loss of performance fee or fines and penalties assessed by DOE or its regulators. 
For example, DOE may withhold payment of a performance award, called a fee, from contractors for failure to meet specified performance objectives or measures or to comply with applicable environmental, safety, and health requirements. The tank farm and waste treatment plant contractors both lost performance fee because of work stoppages as follows: For the July 2007 spill at the tank farms, under CH2M Hill’s “conditional payment of fee” provision, DOE reduced by $500,000 the performance fee the contractor could have earned for the year. In its memo to the contractor, DOE stated that the event and the contractor’s associated response were not consistent with the minimum requirement for protecting the safety and health of workers, public health, and the environment. Nevertheless, DOE did allow CH2M Hill to earn up to $250,000, or half the reduction amount, provided the contractor fully implemented the corrective action plan developed after the accident investigation, with verification of these actions by DOE personnel. Bechtel also lost performance fee because of design and construction deficiencies at the waste treatment plant facilities and the 2-year delay on construction of the pretreatment and high-level waste facilities. Overall, DOE withheld $500,000 in Bechtel’s potential performance fee for failure to meet construction milestones. In addition, DOE withheld $300,000 under the “conditional payment of fee” provision in the contract after a number of serious safety events and near misses on the project. Furthermore, beyond reductions in potential fee for safety violations and work stoppages, contractors may also be assessed fines or civil penalties by DOE and other federal and state regulators for violating nuclear safety rules and other legal or regulatory requirements. These fines and penalties are one of the categories of costs that are specifically not allowed under cost-reimbursement contracts, and these costs are borne solely by the contractor.
For example, DOE’s Office of Enforcement can assess civil penalties for violations of nuclear safety and worker safety and health rules. Both contractors were assessed fines or civil penalties for the events associated with their work stoppages. Fines and penalties assessed against CH2M Hill for the July 2007 tank spill totaled over $800,000 and included (1) civil penalties of $302,500 assessed by DOE’s Office of Enforcement for violation of nuclear safety rules, such as long-standing problems in ensuring engineering quality and deficiencies in recognizing and responding to the spill; (2) a Washington State Department of Ecology fine of $500,000 for inadequacies in design of the waste retrieval system and inadequate engineering reviews; and (3) a fine of $30,800 from the Environmental Protection Agency for delays in notification of the event. The contractor was required to notify the agency within 15 minutes of the spill but instead took almost 12 hours. From March 2006 through December 2008, DOE’s Office of Enforcement issued three separate notices of violation to Bechtel, with civil penalties totaling $748,000. These violations of nuclear safety rules were associated with procurement and design deficiencies of specific components at the waste treatment plant. In its December 2008 letter to the contractor, DOE stated that significant deficiencies in Bechtel’s quality-assurance system represented weaknesses that had also been found in the two earlier enforcement actions. For the majority of DOE’s reported work stoppages, no supporting documentation was available to evaluate whether better oversight or regulation could have prevented them. For two incidents for which documentation was available—internal investigations and prior GAO work—a lack of oversight contributed to both. These two work stoppages occurred at the tank farms and the waste treatment plant, and both resulted from engineering-design problems. 
In a third case—efforts to address potentially hazardous vapors venting from underground waste storage tanks—DOE’s efforts to enforce worker protections were found to have been inadequate, although this lack of oversight does not appear to have directly caused the work stoppage associated with the vapors problem. Insufficient oversight was a factor in these three events as follows: Accidental spill of radioactive and chemical wastes at tank S-102. Specifically, the accident investigation report for the tank farm spill found that oversight and design reviews by DOE’s Office of River Protection failed to identify deficiencies in CH2M Hill’s tank pump system, which did not meet nuclear safety technical requirements. The Office of River Protection failed to determine that this pump system did not have a needed backflow device to prevent excessive pressure in one of the hoses serving a tank, ultimately causing it to fail and release waste, which then overflowed from the top of this tank and spilled to the ground. In addition, the investigation found that CH2M Hill failed to respond to the accident in a timely manner and failed to ensure that nuclear safety requirements had been met. Outdated ground-motion studies supporting seismic design of the waste treatment plant. Lax oversight was also a factor in a second event at the waste treatment plant. GAO in 2006 found that DOE’s failure to effectively implement nuclear safety requirements, including requirements that all waste treatment plant facilities would survive a potential earthquake, contributed substantially to delays and growing costs at the plant. The Defense Nuclear Facilities Safety Board first expressed concerns with the seismic design in 2002, believing that the seismic standards followed had not been based on then-current ground-motion studies and computer models or on geologic conditions directly below the waste treatment plant site. 
It took DOE 2 years to confirm that the designs for two of the facilities at the site—the pretreatment and the high-level waste facilities— were not sufficiently conservative. Revising the seismic criteria required Bechtel to recalculate thousands of design drawings and engineering estimates to ensure that key components of these facilities would be adequately anchored. Work was halted at the two facilities for 2 years as a result. Controlling worker exposure to tank farm vapors. In 2004, DOE’s then Office of Independent Oversight and Performance Assurance (today reorganized as DOE’s Office of Health, Safety, and Security) investigated vapor exposures at the Hanford tank farms and the adequacy of worker safety and health programs at the site, including the adequacy of DOE oversight. Investigators were unable to determine whether any workers had been exposed to hazardous vapors in excess of regulatory limits but found several weaknesses in the industrial hygiene (worker safety) program at the site, in particular, hazard controls and DOE oversight. According to the investigation, the Office of River Protection had not effectively overseen the contractor’s worker safety program; had failed to provide the necessary expertise, time, and resources to adequately perform its management oversight responsibilities at the tank farms; and had failed to ensure corrective action for identified problems. After the investigation, DOE stepped up its monitoring efforts at the tank farms, and the contractor provided tank farm workers with supplied air, an action that slowed or halted work at the tank farms for about 2 weeks while supplied air equipment was secured and workers were trained to use it. With regard to regulations, however, officials we interviewed from DOE, the Defense Nuclear Facilities Safety Board, and the Office of Inspector General said they did not believe that insufficient regulation was a factor in these two events. 
Officials from the Nuclear Regulatory Commission declined to comment on the sufficiency of regulations. The final cost to the American public of cleaning up the Hanford Site is expected to reach tens of billions of dollars. Consequently, factors that can potentially escalate costs—including work stoppages—matter to taxpayers, DOE, and Congress. Depending on what causes a work stoppage and how long it lasts, some stoppages could increase already substantial cleanup costs. Although prudent oversight would seem to call for DOE to understand the reasons for work stoppages and the effects of these work stoppages on costs, neither law nor regulation requires that this information be systematically recorded and reported. DOE and other stakeholders have expressed reservations that collecting information on work stoppages could send a message that work stoppages should be minimized, thus discouraging managers or workers from reporting potential safety or construction quality issues. We recognize that the opportunity for any manager or worker to call a work stoppage when worker safety or construction quality is at stake is an integral part of DOE’s safety and construction management strategies and should not be stifled. Yet DOE has also recognized the importance of cost information and in one recent case—the 2007 tank waste spill—required the contractor to separately track detailed cost information. In addition, we previously recommended that DOE require contractors to track the costs associated with future work stoppages, similar to the one at Los Alamos National Laboratory in 2004, and DOE agreed with this recommendation. While acknowledging these competing pressures, we believe that systematically collecting cost information on selected work stoppages can increase transparency and yet balance worker and public safety. 
To provide a more thorough and consistent understanding of the potential effect of work stoppages on project costs, we recommend that the Secretary of Energy take the following two actions: (1) establish criteria for when DOE should direct contractors to track and report to DOE the reasons for and costs associated with work stoppages, ensuring that these criteria fully recognize the importance of worker and nuclear safety, and (2) specify the types of costs to be tracked. We provided a draft of this report to the Secretary of Energy for review and comment. In written comments, the Chief Operations Officer for Environmental Management generally agreed with our recommendations, stating that they will be accepted for implementation within the Environmental Management program. The comments (which are reproduced in app. III) were silent on whether the recommendations will be implemented in other DOE programs. In its comments, DOE expressed concern that readers of appendix II could misconstrue the information in the column labeled “Duration” as representing a delay in the entire listed project, not simply the time required to resolve the specific issue in question; DOE maintains that during this time, workers were shifted to other work activities. We found, however, that some of the short work stoppages, which DOE termed “safety pauses,” were specifically called to allow the contractor to refresh workers’ understanding of workplace hazards; in these cases, which were essentially training exercises, workers were not reassigned to other work activities. Other work stoppages may have led to workers’ assignment to other activities, but we were unable to verify to what extent reassignment occurred because the documentation available on work stoppages was limited. 
Finally, during the 2-year delay due to seismic concerns in waste treatment plant construction, work on two facilities—the pretreatment plant and high-level waste facility—was ultimately suspended from August 2005 until August 2007, and about 900 workers were laid off, not reassigned. We added a footnote to table 1 to clarify the “Duration” column. Regarding our discussion of the role of oversight in several work stoppages, DOE acknowledged that inadequate oversight was a factor in the cited work stoppages and stated that the Office of Environmental Management has implemented corrective actions to address these contributing factors. Evaluating these actions and the resulting outcomes, if any, however, was beyond the scope of our report. We incorporated other technical comments in our report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Energy and interested congressional committees. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

To determine the number of times work was suspended at the Hanford Site, we obtained from the Department of Energy’s (DOE) Office of River Protection officials a listing of work stoppages occurring from January 2000 through December 2008 at either the waste treatment plant or the tank farms. We did not review other work stoppages that may have occurred elsewhere at the Hanford Site during this period.
We sought to independently verify the 31 work stoppages identified by DOE and to uncover additional information about them, including the nature, duration, and scope of each, by reviewing the following: DOE’s Occurrence Reporting and Processing System, a database of reportable accidents and other incidents affecting worker, public, and environmental safety; DOE’s database of investigation reports on accidents causing serious injury to workers or serious damage to the facility or the environment; DOE citations issued against contractors for violating nuclear safety requirements; Defense Nuclear Facilities Safety Board reports addressing the Hanford Site; and Bechtel National Inc. and CH2M Hill Hanford Group Problem Evaluation Requests, which are internal reports of incidents or accidents involving safety issues. We were unable to independently verify DOE’s list of work stoppages from these sources, however, because in most cases, the reporting systems did not indicate whether safety incidents had halted work or, if so, for how long. In addition, these reporting systems focus on safety incidents and do not specifically address construction rework and design problems, which represent about half the work stoppages reported by DOE. Of the 31 work stoppages reported, however, we were able to obtain additional information from other sources for three specific events. These were (1) ongoing problems protecting workers from potentially harmful vapors venting from the tank farms, (2) a radioactive waste spill from tank S-102 in July 2007, and (3) the seismic redesign from August 2005 to August 2007 of the waste treatment plant pretreatment and high-level waste facilities. 
To obtain a more thorough understanding of these three work stoppages, what caused them, and how problems were corrected, we reviewed DOE, contractor, and Office of the Inspector General evaluations of these events, including official accident reports, external independent investigations, and our 2006 testimony on cost and schedule problems at the Hanford waste treatment plant. To determine the types of costs associated with work stoppages, we reviewed Federal Acquisition Regulation reporting requirements for cost-reimbursement contracts and Defense Contract Audit Agency guidance on auditing incurred costs. To gain a better understanding of the costs associated with lost productivity resulting from a work stoppage, we reviewed cost-estimating guidance from the Association for the Advancement of Cost Engineering International and earned value management guidance by GAO and by the National Research Council. To develop an understanding of the costs paid by the government, compared with those absorbed by the contractor, we reviewed Bechtel National Inc. and CH2M Hill Hanford Group requests to DOE for equitable adjustments to their respective contracts to recover lost productivity and other costs linked to work stoppages. We reviewed the Atomic Energy Act of 1954, as amended, and the letters sent from DOE to contractors requesting that they segregate costs incurred in connection with investigations of potential violations of the law and DOE nuclear safety requirements. We reviewed assessments by Washington State, DOE, and federal regulators fining Bechtel and CH2M Hill Hanford Group for safety violations and other problems at the Hanford Site since 2000. Finally, we interviewed contractor and Office of River Protection finance officials to determine cost-accounting requirements and practices. 
To determine whether more-effective regulation or oversight might have prevented the work stoppages, we relied primarily on Office of River Protection and Bechtel officials’ assessments of these events because supporting documentation was generally unavailable. For 3 of the 31 work stoppages, we reviewed numerous internal DOE, external independent, and contractor evaluations to assess whether lack of oversight was a contributing factor. To gain further perspective on how lack of oversight or regulations might have played a role in these work stoppages, we interviewed DOE headquarters officials with the Offices of Environmental Management; Health, Safety, and Security; and General Counsel. We interviewed officials with regulatory and oversight entities, including the Defense Nuclear Facilities Safety Board, the Occupational Safety and Health Administration, and the Nuclear Regulatory Commission. We also interviewed union representatives at the Hanford Site to obtain the union’s and workers’ perspectives on work stoppages and safety. We conducted this performance audit from June 2008 to April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We obtained and reviewed information on 31 work stoppages that occurred at the Hanford Site from January 2000 to December 2008; these are summarized in table 1. In addition to the individual named above, Janet Frisch, Assistant Director; Carole Blackwell; Ellen W. Chu; Brenna McKay; Mehrzad Nadji; Timothy M. Persons, Chief Scientist; Jeanette Soares; Ginny Vanderlinde; and William T. Woods made key contributions to this report. 
| The Department of Energy's (DOE) Hanford Site in Washington State stores 56 million gallons of untreated radioactive and hazardous wastes resulting from decades of nuclear weapons production. DOE is constructing facilities at the site to treat these wastes before permanent disposal. As part of meeting health, safety, and other standards, work at the site has sometimes been suspended to address safety or construction quality issues. This report discusses (1) work stoppages from January 2000 through December 2008 and what is known about them, (2) the types of costs associated with work stoppages and who paid for them, and (3) whether more effective regulation or oversight could have prevented the work stoppages. GAO interviewed knowledgeable DOE and contractor officials about these events. When documentation was available, GAO obtained DOE and contractor accident and safety incident reports, internal DOE and independent external evaluations, and costs. DOE officials reported that from January 2000 through December 2008, activities to manage hazardous wastes stored in underground tanks and to construct a waste treatment facility have been suspended at least 31 times to address safety concerns or construction quality issues. Federal regulations governing contracts do not require contractors to formally report work stoppages and the reasons for them, and DOE does not routinely collect information on them. As a result, supporting documentation on work stoppages was limited. DOE reported that work stoppages varied widely in duration, with some incidents lasting a few hours, and others lasting 2 years or more. Officials reported that about half the work stoppages resulted from concerns about worker or nuclear safety and included proactive safety "pauses," which typically were brief and taken to address an unsafe condition that could potentially harm workers. 
The remainder of the work stoppages occurred to address concerns about construction quality at the waste treatment plant. Under the terms of the cost-reimbursement contracts for managing the tanks and constructing the waste treatment plant, DOE generally pays all costs associated with temporary work stoppages and does not require the contractor to separately track these costs, although DOE and the contractors do track some costs under certain circumstances. For example, the costs for cleaning up, investigating, and implementing corrective actions were collected for a July 2007 hazardous waste spill at one of the tank farms; these costs totaled over $8 million. The contractors, too, can face financial consequences, such as reduction in earned fee or fines and penalties assessed by DOE or outside regulators. For example, DOE may withhold payment of a performance award, called a fee, from contractors for failure to meet specified performance objectives or to comply with applicable environmental, safety, and health requirements. For the majority of DOE's reported work stoppages, supporting documentation was not available to evaluate whether better oversight or regulation could have prevented them. For 2 of 31 work stoppages where some information was available--specifically, accident investigations or prior GAO work--inadequate oversight contributed to the work stoppages. For example, the accident investigation report for the tank farm spill found that oversight and design reviews by DOE's Office of River Protection failed to identify deficiencies in the tanks' pump system design, which did not meet nuclear technical safety requirements. Similarly, in 2006, GAO found that DOE's failure to effectively implement nuclear safety requirements contributed substantially to schedule delays and cost growth at Hanford's waste treatment plant. 
With regard to regulations, however, officials from DOE, the Defense Nuclear Facilities Safety Board, and DOE's Office of Inspector General said they did not believe that insufficient regulation was a factor in these events. |
The Medicare Part D program offers beneficiaries an outpatient prescription drug benefit through various plan sponsors who offer coverage through drug plans, which may vary in terms of their benefits and costs. Enrollment in Part D consists of several steps and requires coordination among various organizations, such as CMS, plan sponsors, and SSA. If beneficiaries are not satisfied with certain aspects of the Part D program, they may file a complaint with CMS, a grievance with their respective plan sponsors, or they can file with both. CMS oversees the complaints and grievances processes and may rely on complaints and grievances data to undertake compliance actions against specific plan sponsors. The Medicare Part D benefit is provided through private organizations— such as health insurance companies—that offer one or more drug plans with different levels of premiums, deductibles, and cost sharing. Part D plan sponsors offer outpatient prescription drug coverage either through stand-alone prescription drug plans (PDPs) for those in traditional fee-for- service Medicare, or through Medicare Advantage prescription drug (MA-PD) plans for beneficiaries enrolled in Medicare’s managed care program, known as Medicare Advantage. In 2007, CMS entered into more than 600 individual contracts with about 250 plan sponsors to provide Part D benefits. Under these contracts, PDP sponsors offered about 1,900 individual plan benefit packages and sponsors of MA-PDs offered about 1,700. The majority of Part D enrollees, about 70 percent, were enrolled in PDPs during this time. Enrollment across contracts varies widely, and is highly concentrated—the 4 largest contracts accounted for nearly 40 percent of total Part D enrollment in 2007. Beneficiaries enroll in the Part D program when they first become eligible for Medicare or during an annual coordinated election period and, once enrolled in a drug plan, typically have one opportunity each year to change their plan selection. 
Processing a Part D enrollment involves multiple, timely, and accurate electronic data exchanges among federal agencies, private health plans, and pharmacies. For instance, data exchanges occur between plan sponsors and CMS to verify benefit eligibility. Pharmacies rely on this information to ensure that payments for beneficiaries filling their prescriptions are processed appropriately. During the enrollment process, beneficiaries choose one of three options for paying their share of their Part D premiums—direct billing, automated withdrawal from financial accounts, or automatic deductions from social security payments, called premium withholds. As of January 2008 about 20 percent of Part D enrollees—4.8 million beneficiaries—opted to have premiums withheld from their social security payments, which requires coordination among plan sponsors, CMS, and SSA. When a beneficiary elects this option, CMS provides enrollment and payment information it receives from plan sponsors to SSA for processing. SSA then deducts premium amounts from beneficiaries’ monthly social security payments and provides CMS with information on the amount of premiums it deducted in order for CMS to pay the appropriate plan sponsors. Beneficiaries can express dissatisfaction with any aspect of the Part D program, other than coverage determinations, by filing a complaint with CMS or filing a grievance directly with their respective plan sponsors (see fig. 1). The processes for resolving complaints and grievances are independent of one another and the status of individual complaints and grievances is tracked separately. Although CMS encourages beneficiaries to first file a grievance with their respective plan sponsors, a beneficiary can choose to seek resolution by directly contacting CMS first to file a complaint or by filing a complaint and grievance simultaneously. 
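The premium-withhold handoffs described above can be expressed as a minimal sketch. This is an illustrative simulation only: all identifiers, amounts, and function names are hypothetical, and the real exchanges are batch data transfers among plan sponsors, CMS, and SSA rather than function calls.

```python
# Hypothetical sketch of the premium-withhold flow: sponsors send
# enrollment and premium data to CMS, CMS forwards it to SSA, SSA
# deducts the premium from the beneficiary's monthly social security
# payment, and CMS uses SSA's deduction report to pay each sponsor.
from dataclasses import dataclass

@dataclass
class WithholdRequest:
    beneficiary_id: str
    sponsor_id: str
    monthly_premium: float  # Part D premium to deduct each month

def ssa_process(requests, ss_payments):
    """SSA deducts premiums and reports deduction amounts back to CMS."""
    deductions = []
    for req in requests:
        ss_payments[req.beneficiary_id] -= req.monthly_premium
        deductions.append((req.sponsor_id, req.monthly_premium))
    return deductions

def cms_pay_sponsors(deductions):
    """CMS aggregates SSA-reported deductions into payments per sponsor."""
    totals = {}
    for sponsor_id, amount in deductions:
        totals[sponsor_id] = totals.get(sponsor_id, 0.0) + amount
    return totals

ss_payments = {"B1": 1200.00, "B2": 900.00}   # monthly social security payments
requests = [WithholdRequest("B1", "SPONSOR-A", 32.50),
            WithholdRequest("B2", "SPONSOR-A", 27.00)]
deductions = ssa_process(requests, ss_payments)
print(cms_pay_sponsors(deductions))   # {'SPONSOR-A': 59.5}
print(ss_payments["B1"])              # 1167.5
```

The sketch also shows why coordination matters: if CMS's enrollment records and SSA's deduction records diverge, the amounts deducted and the amounts forwarded to sponsors no longer reconcile, which is the class of complaint the text describes.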
Beneficiaries typically file complaints by calling CMS’s 1-800-Medicare toll-free number or by contacting one of CMS’s 10 regional offices through telephone, fax, mail, or e-mail. For complaints filed through the toll-free number, customer service representatives (CSRs) enter details about the complaints into the 1-800-Medicare database, and assign the complaint to specific contracts administered by plan sponsors. CSRs also categorize the complaint in several ways, including by (a) the nature of the complaint using 20 categories and over 180 subcategories, such as whether the complaint relates to enrollment, pricing, or customer service; and (b) the complaint’s issue level or level of urgency, which corresponds to one of three issue levels—immediate need, urgent, or routine—depending on the beneficiary’s risk of exhausting his or her medication supply while resolution of the complaint is pending. The information included in the 1-800-Medicare database is uploaded each day into the CTM—CMS’s centralized database of complaints information. For complaints filed with the CMS regional offices, regional staff similarly categorize complaints by their nature and issue level and input them directly into the CTM. Most complaints in the CTM are assigned to specific contracts administered by plan sponsors who utilize their own staff to resolve beneficiaries’ concerns. For complaints beyond the control of plan sponsors, such as those involving premium withholding and certain enrollment issues, plan sponsors request, through the CTM, that CMS resolve the complaint. Once complaints are resolved, the resolution date must be entered into the CTM. CMS requires that immediate need complaints be resolved within 2 calendar days, and encourages that urgent and routine complaints be resolved within 10 and 30 calendar days respectively. According to CMS policy, beneficiaries should be notified once their complaints are resolved. 
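The triage and timeliness rules just described can be sketched as a small function. The cutoffs come from the text (0 to 2 days of medication remaining is immediate need, 3 to 14 is urgent, otherwise routine; resolution targets of 2, 10, and 30 calendar days); the function names are hypothetical.

```python
# Sketch of CMS's complaint triage rules: issue level is determined by
# the beneficiary's remaining medication supply, and each level carries
# a resolution target in calendar days (2 required, 10 and 30 encouraged).
RESOLUTION_DAYS = {"immediate need": 2, "urgent": 10, "routine": 30}

def issue_level(days_of_medication_left):
    """Classify a complaint by the beneficiary's remaining supply."""
    if days_of_medication_left <= 2:
        return "immediate need"
    if days_of_medication_left <= 14:
        return "urgent"
    return "routine"

def resolved_on_time(days_of_medication_left, days_to_resolve):
    """Did resolution meet the time frame for this issue level?"""
    level = issue_level(days_of_medication_left)
    return days_to_resolve <= RESOLUTION_DAYS[level]

print(issue_level(1))            # immediate need
print(resolved_on_time(1, 12))   # False: 12 days misses the 2-day target
print(resolved_on_time(20, 25))  # True: routine complaint within 30 days
```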
Beneficiaries also have the right to express dissatisfaction by filing a grievance directly with their plan sponsors via telephone, fax, mail, or e-mail. Plan sponsors enter information about the grievances in their internal tracking systems and assign individual grievances to their staff, who work to resolve them. Plan sponsors are required to resolve grievances within 30 days, but can allow for a 14-day extension in some cases. Plan sponsors must inform beneficiaries of the outcome of the grievances process, and beneficiaries who are dissatisfied may choose to file a complaint with CMS on the same issue. CMS is responsible for overseeing the Part D program, which includes overseeing the complaints and grievances processes and ensuring that beneficiaries’ problems are addressed. To oversee the complaints process, CMS staff monitor data within the CTM, including calculating complaint rates and resolution times for each Part D contract administered by a plan sponsor. Specifically, CMS monitors resolution time frames to determine whether plan sponsors resolve complaints assigned to their contracts within applicable time frames. To aid its oversight of the grievances process, CMS requires plan sponsors to categorize grievances into 1 of 11 categories, which differ from CTM categories, and submit quarterly reports for each of their contracts on the number of grievances by category (see app. I). CMS uses these data to calculate grievance rates to identify plan sponsors with outlier contracts. According to CMS officials, the agency can initiate a range of actions against plan sponsors it determines have noncompliant processes (see fig. 2). For example, CMS can make a formal compliance call to plan sponsors to discuss identified issues. However, if CMS’s monitoring indicates that plan sponsors are not taking corrective actions in response to the compliance call, CMS may pursue more stringent compliance actions. 
For example, the agency may send formal written notices of noncompliance, which notify plan sponsors of their noncompliance and explicitly inform them that they must address the problems. For plan sponsors that remain noncompliant, CMS can send warning letters that notify plan sponsors that their performance is unacceptable; request that plan sponsors submit written corrective action plans that show formal plans to come into compliance; or audit the plan sponsors. In the most extreme cases of noncompliance, CMS can impose intermediate sanctions, which include suspension of enrollment, payment, or marketing activities. CMS can also impose a civil monetary penalty or terminate or decline to renew a Part D contract. Most complaints related to enrollment issues, and while both the number of complaints and the time needed to resolve them decreased as the Part D program matured, ongoing challenges continued to pose problems for some beneficiaries. The majority of complaints were related to delays and errors in processing beneficiaries’ enrollment and disenrollment requests and were resolved. In addition, a small proportion of complaints involved cases where beneficiaries were at risk of depleting their medication supplies. Further, trends in complaints data suggest that beneficiaries reported fewer complaints over time and their problems were resolved more quickly as they, plan sponsors, and CMS gained experience with the Part D benefit. However, the complaints data also revealed some ongoing challenges facing the program, including problems related to data system coordination between CMS and plan sponsors and between CMS and SSA, which continued to present difficulties for some beneficiaries. During the 18-month period from May 1, 2006, through October 31, 2007, 629,792 complaints were filed with CMS—an average monthly complaint rate of 1.5 complaints per 1,000 beneficiaries. 
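The complaint-rate figure above is straightforward arithmetic: total complaints divided by the number of months and by enrollment, scaled per 1,000 beneficiaries. A minimal sketch follows; the enrollment of roughly 23.3 million is an illustrative assumption (only the complaint count, the 18-month window, and the resulting rate are reported in the text).

```python
# Sketch of the complaint-rate arithmetic: average monthly complaints
# per 1,000 enrolled beneficiaries. The enrollment figure is assumed
# for illustration, chosen so the result matches the reported rate.
def monthly_complaint_rate(total_complaints, months, enrollment):
    """Average monthly complaints per 1,000 enrolled beneficiaries."""
    return total_complaints / months / enrollment * 1000

rate = monthly_complaint_rate(629_792, 18, 23_300_000)
print(round(rate, 1))  # 1.5
```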
The majority of complaints—about 63 percent—were related to problems beneficiaries experienced when trying to enroll in or disenroll from a plan, and about 21 percent were related to pricing and coinsurance issues. The remaining 15 percent of complaints were spread among the other 18 CTM categories, and included complaints related to customer service and marketing of plans (see fig. 3). The vast majority—about 73 percent of the enrollment and disenrollment complaints, or 290,000 complaints—were assigned to five CTM subcategories and were related to delays and errors in processing beneficiaries’ enrollment or disenrollment requests. According to CMS officials, such problems occurred when enrollment records between CMS and plan sponsors differed or contained errors, and thus extra time was needed for CMS and plan sponsors to identify and correct the errors and ensure beneficiaries were enrolled in their plans of choice. Approximately 47,000 (or more than 35 percent) of the complaints that were categorized as pricing and coinsurance issues were related to beneficiaries who experienced problems having their premiums automatically deducted from their social security payments. Specifically, these complaints included cases in which the wrong amounts were deducted from beneficiaries’ social security payments, the correct amounts were being deducted but were not forwarded to the appropriate plan sponsor for payment, or premiums had not yet been deducted when beneficiaries expected otherwise. According to CMS officials, many of the complaints related to accurately deducting premiums and forwarding payments to plan sponsors were due to problems with data exchanges between CMS and SSA. In addition, CMS officials indicated that beneficiaries are not always aware that it can take several months for SSA to process a request for premium deductions; therefore, they may file complaints when premiums are not immediately deducted from their social security payments. 
Many of the remaining pricing and coinsurance complaints were filed because some beneficiaries complained they were charged too high a coinsurance amount for their prescriptions. In addition to complaint categories, the CTM also contains information on the “issue level” of complaints (immediate need, urgent, routine), and the dates complaints were filed and resolved. We found that about 73 percent of complaints were unrelated to beneficiaries at risk of depleting their supplies of medication and were considered routine. About 20 percent of complaints were considered immediate need, meaning beneficiaries had between 0 and 2 days of medication remaining, and about 7 percent of complaints were considered urgent, meaning beneficiaries had 3 to 14 days of medication remaining. Further, using CTM dates, we found that 99 percent of all complaints filed between May 2006 and October 2007 were resolved, on average, in 25 days. Although immediate need and urgent complaints were resolved, on average, much more quickly—12 days for immediate need complaints and 16 days for urgent complaints—these average resolution times still exceeded CMS’s resolution time frames. Finally, we found that 44 percent of all complaints involved issues, such as those related to premium deductions from social security payments, which were beyond the control of plan sponsors, and thus required CMS intervention for resolution. When compared to complaints that plan sponsors could resolve independently, these complaints took, on average, twice as long—34 days compared to 17 days—to resolve. According to CMS officials, the lengthier resolution times for complaints requiring CMS intervention reflected the fact that these complaints were often related to delays associated with reconciling data between the agency and plan sponsors or SSA. Trends in the complaints data indicate that beneficiaries reported fewer problems and their problems were resolved more quickly. 
For example, while the average monthly complaint rate was 1.5 per 1,000 beneficiaries during the period, the monthly complaint rate declined by 74 percent from its peak of 2.86 complaints per 1,000 beneficiaries in May 2006 to .73 in October 2007 (see fig. 4). In addition, the average time needed to resolve beneficiaries’ complaints declined by 73 percent, from a peak of 33 days in July 2006 to 9 days in October 2007 (see fig. 5). The decline in average resolution time for complaints CMS resolved during this period was even more pronounced, falling from 51 days to 11 days. According to CMS officials, the decline in monthly complaint rates and average resolution times reflected improved implementation of the Part D program since the initial election period, and improved familiarity of the program among beneficiaries, plan sponsors, and CMS itself. While trends in the complaints data highlighted declines in the monthly complaint rate and average resolution times, they also revealed some ongoing challenges facing the program. Specifically, the data confirmed information-processing issues related to beneficiaries’ requests for enrollment changes and automatic premium withholds from their Social Security payments remained. For example, despite the trend in the overall complaint rate discussed earlier and as shown in figure 4, the complaint rate nearly doubled, from .72 in December 2006 to 1.40 in January 2007. This was due largely to a spike in the number of complaints related to delays or errors when CMS and plan sponsors processed beneficiaries’ enrollment and disenrollment requests following the end of the 2007 annual coordinated election period. More specifically, according to CMS officials this increased complaint rate was due largely to the sheer volume of transactions processed during this time each year. 
The officials told us that while they expect to continue to see an increase in complaints each year following the annual coordinated election period, they expect the magnitude of such increases to diminish as the program matures. In addition, the general trend of increasing complaint rates from January 2007 through May 2007 reflected increasing numbers of complaints related to beneficiaries’ requests for automatic withholding of premiums that can occur when beneficiaries elect to change plans. According to CMS officials, the timing of when SSA processes the premium withhold request may affect the accuracy of the deduction, and result in complaints. For example, as required by law, SSA must process cost-of-living adjustments for beneficiaries’ social security payments on an annual basis, and according to SSA, they begin this processing in November of each year. To process these adjustments for recipients who are also enrolled in Part D and have chosen the premium withholding option, SSA must rely on CMS enrollment information to determine the amount to deduct for Part D premiums. However, because beneficiaries may have elected to change plans during the Part D annual coordinated election period, which runs from November 15 through December 31 of each year, SSA’s calculations may not account for premium differences related to beneficiaries’ subsequent enrollment changes. CMS officials indicated that there is no easy solution to the data coordination and timing issues between CMS and SSA at the root of this problem. However, CMS and SSA have formed several work groups to identify improvements, including improved data system exchanges, which could help reduce complaints related to this issue. In the interim, CMS has undertaken outreach efforts to plan sponsors and beneficiaries to inform them of potential delays related to requests for automatic premium withholds, letting them know that such requests may take several months to process. 
Finally, while we found that CMS and plan sponsors resolved complaints, including immediate need and urgent complaints, more quickly as the Part D program matured, a substantial percentage of such complaints were not resolved within CMS’s time frames. Specifically, during the period from May 2006 through October 2007, 53 percent of immediate need complaints (66,001) and 27 percent of urgent need complaints (10,476) were not resolved within the applicable time frames. Further, progress in meeting the time frames, particularly for immediate need cases, largely stagnated from March 2007 to October 2007, as the proportion of cases not meeting the time frame hovered around 30 percent each month (see fig. 6). Grievances data reported by plan sponsors for their contracts contained limitations and anomalies and did not yield sufficient insight into beneficiaries’ experiences with Part D. In contrast to the data CMS collects on complaints, CMS only requires plan sponsors to submit quarterly reports on the total number of grievances they received in 11 CMS-defined categories for each of their Part D contracts. Therefore, CMS does not have information about whether a grievance is related to a beneficiary’s medication supply or whether it was ultimately resolved. As a result, we were unable to determine the extent to which beneficiaries’ grievances related to medication supply issues, the extent to which plan sponsors were resolving grievances, or whether they were resolving them in a timely manner. In addition to their limited nature, we identified a number of anomalies in the grievances data that raise questions about their accuracy and usefulness in drawing conclusions about beneficiaries’ experiences with Part D. 
Among these anomalies, we found that grievances were concentrated in a small number of contracts, and at a rate that was significantly disproportionate to their respective enrollments, raising questions about whether plan sponsors were reporting grievances data for their contracts in a comprehensive and consistent manner. For example, in 2006 plan sponsors reported grievances data for 522 contracts, 19 of which accounted for 80 percent of all grievances but only 49 percent of enrollment. The concentration was more pronounced in 2007, when 11 of the 604 contracts for which grievances data were reported accounted for 90 percent of all grievances but only 42 percent of enrollment. We also found significant variations in the number of grievances reported for contracts with similar levels of enrollment, and in the number of grievances filed between 2006 and 2007. For example, in 2006, while the two largest contracts each averaged about 3 million enrollees, one contract had more than 140,000 grievances, for an average monthly grievance rate of 4.22 per 1,000 beneficiaries, while the other contract had fewer than 4,000 grievances, for a grievance rate of .11 per 1,000 beneficiaries. In addition, in contrast to the decline in the monthly complaint rate that we identified, available data show an increase in the average monthly grievance rate between 2006 and 2007. Specifically, while a total of 310,215 grievances were reported in 2006, for an average monthly grievance rate of 1.23 per 1,000 beneficiaries, there were a total of 726,440 grievances reported for the first 3 quarters of 2007 alone, for a rate of 3.38 per 1,000 beneficiaries. We found that this variation was predominately due to differences in the number of grievances reported for three contracts, which had a total of 70 grievances for 2006, and 495,961 for the first 3 quarters of 2007, despite having nearly identical levels of total enrollment in each year. 
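The kind of rate comparison behind these anomaly findings can be sketched as follows. The first two contracts echo the 2006 example above (grievance counts are backed out of the quoted rates of 4.22 and .11 per 1,000 for two contracts of about 3 million enrollees each); the third contract and the 10x-median flagging rule are purely illustrative assumptions, not CMS's actual screening criteria.

```python
# Sketch of an outlier screen on grievance rates: compute each contract's
# average monthly grievance rate per 1,000 enrollees, then flag contracts
# far above the group median. The 10x-median threshold is an assumption.
from statistics import median

def grievance_rate(grievances, months, enrollment):
    """Average monthly grievances per 1,000 enrolled beneficiaries."""
    return grievances / months / enrollment * 1000

contracts = {
    "Contract-A": grievance_rate(151_920, 12, 3_000_000),  # ~4.22 per 1,000
    "Contract-B": grievance_rate(3_960, 12, 3_000_000),    # ~0.11 per 1,000
    "Contract-C": grievance_rate(5_400, 12, 3_000_000),    # hypothetical peer
}

med = median(contracts.values())
outliers = [name for name, rate in contracts.items() if rate > 10 * med]
print(outliers)  # ['Contract-A']
```

Even this crude screen flags Contract-A, illustrating how disproportionate its reported grievances were relative to similarly sized contracts.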
Finally, the proportion of grievances assigned to categories varied significantly between 2006 and 2007, a change that is inconsistent with trends in the complaints data. For example, while over 60 percent of the 2006 grievances were assigned to the enrollment and disenrollment category—a percentage generally similar to the complaints data filed with CMS—they assigned approximately 5 percent of the 2007 grievances to this category. In commenting on a draft of this report, CMS indicated that the variation between the two years was likely due to data collection issues that existed during the early implementation of Part D. For example, CMS suggested that the grievances data reported by plan sponsors in 2006 included nongrievances or erroneously categorized grievances in the enrollment and disenrollment category. While CMS has a systematic oversight process for complaints, it lacks a similar oversight framework for plan sponsor-reported grievance processes. To oversee the complaints process, CMS has established a framework consisting of several key elements, which include standard operating policies and procedures and a centralized repository of complaints data, and staff that routinely review and assess the complaints data and take actions against plan sponsors it determines have noncompliant processes. In contrast to complaints, CMS’s oversight of plan sponsors’ grievances processes has been more limited. CMS developed guidance for classifying grievances, required plan sponsors to report summary grievances data for each of their Part D contracts, and periodically reviewed these data. However, limitations in these oversight elements have resulted in plan sponsors reporting incomplete and inconsistent data to CMS, and there is little assurance that beneficiaries’ grievances are resolved or that they are resolved in a consistent fashion. 
To ensure a level of consistency in how complaints are tracked and resolved, CMS developed standard operating procedures for both its caseworkers and plan sponsors. These procedures provide guidance on how complaints should be entered into the CTM as well as how caseworkers and plan sponsors should resolve them. For example, CMS’s guidance includes requirements to enter key dates for each complaint, such as the dates complaints were filed and resolved, and information about how individual complaints should be categorized by their nature and issue level. Specifically, CMS’s guidance to plan sponsors provides information about how they can utilize the CTM to access, review, and document case resolution, or request CMS assistance in the event they are unable to achieve resolution. Through its guidance, CMS has been able to ensure consistency in terms of the information the CTM contains about each complaint. Further, it has allowed the agency to create, through the CTM, a reliable source of data from which it can monitor the complaints process. CMS also dedicated significant resources to ensure that beneficiaries’ complaints are addressed. Specifically, CMS officials estimated that several hundred staff members throughout the agency have some responsibility for the oversight of the complaints process. For example, some regional staff members are responsible for reviewing plan sponsors’ case notes included in the CTM to verify their resolution of complaints or for directly resolving complaints beyond the control of plan sponsors. In addition, other CMS staff members routinely analyze CTM data to identify trends in complaint rates and track issues related to the performance of individual plan sponsors, such as resolution times. For example, on a quarterly basis, CMS staff members analyze complaint rates for individual contracts both by overall complaints and by three CTM categories, and then compare complaint rates among contracts. 
Based on this comparison, CMS staff assign a star rating to each contract. Further, CMS has dedicated staff in the Office of the Medicare Beneficiary Ombudsman (OMO) who utilize complaints data to identify systemic problems affecting the implementation of Part D. When OMO staff identify problems, such as those related to delays in processing enrollment requests and withholding premiums from Social Security payments, they alert high-level CMS managers, who in turn are responsible for initiating corrective actions. CMS officials informed us that the agency may rely on a variety of actions, ranging from formal compliance calls to the termination of a plan sponsor's Part D contract, when it identifies a plan sponsor that is noncompliant with requirements for the complaints process. CMS officials indicated that their use of such actions has been limited because informal conference calls with plan sponsors have frequently been sufficient to correct problems identified through complaints. For example, although CMS officials said that they would require plan sponsors with contracts that received a one or two star rating for 2 consecutive quarters to submit a business plan describing how they would improve their performance, they have never had to do so because their informal calls to such plan sponsors have thus far been sufficient to correct problems. However, in some cases, CMS has taken more stringent actions. For example, as of February 2008, CMS had issued 144 notices of noncompliance and 22 warning letters, and initiated 3 audits against plan sponsors that did not meet their contractual performance requirement to resolve 95 percent of immediate need complaints within 2 days. However, CMS had not terminated any plan sponsor's Part D contract or levied civil monetary penalties in response to issues related to compliance with the complaints process.
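The 95-percent/2-day performance requirement described above lends itself to a simple check: count the share of immediate need complaints resolved within the threshold and compare it with the required floor. This is a hypothetical sketch of such a check, not CMS's actual tooling; the function and variable names are invented for illustration:

```python
def meets_timeliness_requirement(resolution_days, threshold_days=2, required_share=0.95):
    """Return True if at least `required_share` of complaints were resolved
    within `threshold_days` of being assigned to the contract."""
    if not resolution_days:
        return True  # assumption: no immediate need complaints means nothing overdue
    on_time = sum(1 for days in resolution_days if days <= threshold_days)
    return on_time / len(resolution_days) >= required_share

# 19 of 20 complaints resolved within 2 days is exactly 95 percent, so compliant;
# 4 of 5 is only 80 percent, so noncompliant.
compliant = meets_timeliness_requirement([1] * 19 + [5])
noncompliant = meets_timeliness_requirement([1, 2, 2, 1, 3])
```

Per the report, the elapsed days would be counted from the date the complaint was assigned to the contract until the date it was resolved.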
To determine compliance with the performance requirement, CMS measures the number of days that have elapsed between the date the complaint was assigned to the contract and when it was resolved. CMS officials noted that they would consider developing additional performance requirements, such as a requirement related to complaint rates, in the future. However, the officials noted that they would want to examine data trends from at least a 3-year period before doing so. CMS also does not have a mechanism to verify that plan sponsors have effectively resolved complaints. While CMS caseworkers review plan sponsors' notes in the CTM, they do not routinely take a sample of complaints and follow up with beneficiaries to validate the plan sponsors' resolution actions. CMS officials indicated that the agency does not have the resources to perform such a comprehensive check and stated that beneficiaries who are dissatisfied with their plan sponsor's resolution could file another complaint directly with CMS. In contrast to complaints, CMS's oversight of plan sponsors' grievances processes has been more limited. CMS provided plan sponsors with general guidance for determining whether beneficiaries' problems were grievances or coverage determinations, which are addressed through a separate process. CMS also provided plan sponsors with time frames for resolving grievances, periodically reviewed plan sponsor grievances data, and began auditing plan sponsors' grievances processes in 2007. However, although CMS's guidance to plan sponsors included examples of how they could classify beneficiaries' problems, several plan sponsors we interviewed said that this guidance was not detailed enough and raised concerns about whether plan sponsors were accurately differentiating among inquiries (i.e., general questions about the Part D program), grievances, or coverage determinations.
CMS officials acknowledged that some plan sponsors have incorrectly classified inquiries as grievances. Further, in its 2007 audits of plan sponsors’ grievances processes, CMS found numerous cases where plan sponsors did not correctly differentiate between grievances and coverage determinations, supporting plan sponsors’ concerns about the adequacy of the existing guidance. Such confusion about how to classify grievances increases the likelihood that plan sponsors report erroneous or inconsistent information to CMS and that they rely on the wrong processes to address beneficiaries’ concerns. CMS does not require plan sponsors to report certain information on grievances for each of their Part D contracts, such as resolution dates, that is essential for determining whether beneficiaries’ grievances are being resolved, and devotes few resources to reviewing what plan sponsors have reported for their contracts. Instead, on a quarterly basis, each plan sponsor reports the total number of grievances for 11 categories for each of its contracts. CMS officials also could not explain many of the anomalies we identified in the grievances data, such as substantial variation in the enrollment category from 2006 to 2007 and considerable variation in the grievance rates between contracts with similar levels of enrollment. Further, they acknowledged that they had not undertaken efforts to review the data in detail or to assess their overall reliability. In fact, more than a year into the program, CMS officials were still uncertain as to whether grievances had been reported for all contracts, and as of May 2008, agency analysis was limited to calculating annual grievance rates for each contract that did report grievances. 
CMS officials recognized that their efforts to oversee the grievances process have been limited, as they have chosen to focus their attention on other oversight issues such as appeals and coverage determinations and have devoted resources to program implementation issues, such as enrollment of dual-eligible beneficiaries. In the event that plan sponsors are not properly responding to beneficiaries' grievances, CMS officials stated that the issues could be resolved through the complaints process. Therefore, by focusing its attention largely on complaints, the agency expressed confidence that plan sponsors are addressing beneficiaries' issues. While the agency strongly believes in providing plan sponsors the latitude to implement their individual grievances processes, CMS expects to devote more resources to the oversight of grievances processes as the program matures.
January 1, 2006, marked a new era in the Medicare program as the federal government began offering outpatient prescription drug coverage to eligible Medicare beneficiaries. The program is currently in its third year of operation, and millions of individuals have chosen to enroll. While trends in complaints data suggest that CMS and plan sponsors have improved program operations over time, lingering operational issues continue to pose challenges to some beneficiaries. These issues have hindered their ability to enroll in their plans of choice, have their premiums accurately deducted from their Social Security payments, or have critical medication supply problems resolved in a timely manner. While CMS is taking action to address some of these operational issues related to complaints, its continued effort to address these operational challenges will be key to achieving further improvement. Furthermore, CMS does not have reliable grievances data to identify problems and needed improvements and ultimately ensure that beneficiaries' concerns are addressed.
This is particularly important given that CMS encourages beneficiaries to utilize the grievances process as their first line of redress when trying to resolve problems. Without reliable grievances data, CMS cannot ensure that plan sponsors are fulfilling their obligations or provide a full assessment of beneficiaries' experiences with the program. To improve oversight of the Medicare Part D grievances process, and provide added assurance that beneficiaries' grievances are being resolved, we recommend that CMS undertake efforts to improve the consistency, reliability, and usefulness of grievances data reported by plan sponsors for each of their contracts. Such efforts include enhancing its existing guidance for determining whether beneficiaries' problems are grievances, requiring plan sponsors to report information regarding the status and issue level of grievances, and conducting systematic oversight of these data. We provided a draft of this report for comment to the Administrator of CMS. In its written comments (see app. II), CMS remarked that our report did an "impressive job" describing the complex processes employed to monitor complaints and grievances regarding Medicare Part D. The agency concurred with the report's recommendation to undertake efforts to improve the consistency, reliability, and usefulness of grievances data reported by plan sponsors for each of their contracts, and highlighted steps it already has taken to implement it. CMS took issue with the report's conclusion that its oversight activities were focused almost exclusively on resolving complaints with little attention devoted to plan sponsors' grievances processes, and noted that it felt some information, such as details concerning attestations made as part of sponsors' Part D applications, had been omitted from our report. In addition to these comments, CMS provided detailed, technical comments that we incorporated as appropriate.
Consistent with the recommendation to improve the consistency, reliability, and usefulness of grievances data, CMS noted that it has been working to provide Part D sponsors with more comprehensive guidance, enhance its oversight activities, and undertake corrective actions as needed. CMS stated that it recently provided guidance to plan sponsors regarding statutory definitions of grievances, coverage determinations, and appeals to facilitate accurate reporting of these data to CMS. For example, CMS cited its 2008 Reporting Requirements Technical Specifications, released this spring, as part of its efforts to further educate plan sponsors about the differences between coverage determinations and grievances. CMS further stated that it would consider adding data elements related to plan sponsors’ timeliness and quality of grievances resolution to its calendar year 2010 Reporting Requirements. CMS took issue with the report’s conclusion that its oversight activities were focused almost exclusively on resolving complaints with little attention devoted to plan sponsors’ grievances processes. The agency noted that it considered this conclusion misleading and felt it did not appropriately weigh all components of CMS’s oversight of plan sponsors’ grievances processes, such as plan sponsor audits, which include a review of grievances processes. In addition, CMS noted that the report did not consider a component of the Part D application, in which sponsors must attest that they will establish and maintain grievances processes in accordance with federal regulations. Finally, while agreeing with the report’s statement that the average resolution time for immediate need and urgent complaints exceeded CMS’s required time frames, CMS noted that its analysis of more recent complaints data demonstrated that case resolution time frames had improved and were trending towards CMS’s standard time frames. 
We recognize that CMS has audited the grievances processes of some plan sponsors, and the report highlighted key findings from these audits. While we believe CMS can rely on such audits to improve its oversight in the future, the agency did not begin auditing plan sponsors until 2007, and has yet to audit a number of plan sponsors. Further, while we recognize the attestation component of the application requirement, we believe that such attestations provide only limited assurance that beneficiaries’ grievances are being resolved appropriately. We do not believe CMS will be able to ensure that plan sponsors are abiding by their statements until CMS audits the grievances processes of all plan sponsors. Finally, we did not evaluate CMS’s findings on resolution time frames from its more recent data, because the data CMS used to conduct their analyses of resolution time frames were from a time frame beyond the scope of our work. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kathleen King at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Susan Anthony, Assistant Director; Jennie Apter; Shirin Hormozi; David Lichtenfeld; and Jennifer Whitworth made key contributions to this report. Beneficiaries and providers (including pharmacies and physicians) can file complaints with the Centers for Medicare and Medicaid Services (CMS) regarding Medicare Part D. 
Within the Complaint Tracking Module (CTM), beneficiary complaints are assigned to 14 categories and provider complaints to 6 categories, which are further delineated into 186 subcategories. CMS requires that plan sponsors report grievances based on 11 CMS-defined categories, which are somewhat similar to the CTM categories, but do not include subcategories. A description of the complaints and grievances categories is listed below. | Medicare Part D coverage is provided through plan sponsors that contract with the Centers for Medicare & Medicaid Services (CMS). As of April 2008, about 26 million beneficiaries were enrolled in Part D. When beneficiaries encounter problems with Part D, they can either file a complaint with CMS or a grievance with their plan sponsors. CMS centrally tracks complaints data and plan sponsors must report summary data on grievances for each of their contracts. GAO provided information on (1) complaints and what they indicate about beneficiaries' experiences with Part D, (2) whether grievances data provide additional insight about beneficiaries' experiences, and (3) CMS's oversight of the complaints and grievances processes. To conduct its work, GAO reviewed CMS's complaints and grievances data and interviewed the plan sponsors of eight, judgmentally selected contracts, which accounted for 40 percent of 2006 enrollment. While the number of complaints filed with CMS and the time needed to resolve them has diminished as the Part D program has matured, complaints data indicate that ongoing challenges pose problems for some beneficiaries. From May 1, 2006, through October 31, 2007, about 630,000 complaints were filed; most complaints were related to problems in processing beneficiaries' enrollment and disenrollment requests. The monthly complaint rate declined by 74 percent over the period, and the average time needed to resolve complaints decreased from a peak of 33 days to 9 days. 
However, trends in the complaints data also indicate ongoing implementation issues, such as information-processing issues related to beneficiaries' requests for enrollment changes and automatic premium withholds from Social Security payments. In addition, CMS and plan sponsors did not resolve a significant proportion of complaints related to beneficiaries at risk of depleting their medications in accordance with applicable time frames. Due to limitations and anomalies, the grievances data that plan sponsors reported for their contracts did not provide sufficient insight into beneficiaries' experiences with Part D. Specifically, these data did not include information about whether beneficiaries who filed grievances were at risk of depleting their medications or whether plan sponsors were resolving grievances in a timely manner. In addition, GAO identified a number of anomalies in the grievances data, raising questions about whether plan sponsors were reporting these data consistently and accurately. For example, reported grievances were concentrated in a small number of plan sponsors' contracts and at a rate that was significantly disproportionate to their respective enrollment levels; varied considerably among contracts with similar levels of enrollment; and increased from 2006 to 2007, in contrast to patterns in complaints data. CMS's oversight efforts thus far have focused almost exclusively on resolving complaints with little attention devoted to plan sponsors' grievances processes. CMS routinely monitors the status of complaints and has taken actions against plan sponsors who failed to comply with requirements for the complaints process. In contrast, CMS oversight of plan sponsor grievances processes has been more limited. CMS provided plan sponsors with general guidance for classifying grievances and periodically reviewed these data. 
However, several plan sponsors indicated that the guidance was insufficient, increasing the likelihood that plan sponsors report erroneous and inconsistent information to CMS and that they rely on the wrong processes to address beneficiaries' concerns. Further, CMS could not explain many of the anomalies in the grievances data that GAO identified. |
The Higher Education Act of 1965, as amended, defined an HBCU as a school that, among other things, was established before 1964 and is accredited or pre-accredited by a nationally recognized accrediting agency or association. The official list of schools that qualify as HBCUs is published in 34 C.F.R. 608.2(b). A map depicting the locations of the 103 HBCUs and a list of schools by state is in appendix II. HBCUs may have historic properties and may have them listed on the National Register of Historic Places. The National Historic Preservation Act of 1966 authorized the National Register of Historic Places, the official list of the nation’s districts, sites, buildings, structures, and objects significant in American history, architecture, archeology, engineering, and culture. The National Register, administered by NPS, is part of a program to identify, evaluate, and protect the nation’s cultural resources. Properties may be nominated for inclusion on the National Register by states and federal agencies. State nominations, which may be prepared by local citizens, are submitted to a state review board, which makes an approval/disapproval recommendation to the state historic preservation officer (SHPO). If the SHPO approves the nomination, it is forwarded to NPS to be considered for listing. If the nomination is approved by NPS, the property is officially entered on the National Register. In addition to their role in the nomination process, the SHPOs are responsible for surveying and evaluating properties within their states that they believe are eligible for the National Register. The National Register’s criteria for evaluating properties include a determination that the property is significant in American history, architecture, archeology, engineering, and culture and that it possesses integrity of location, design, setting, materials, workmanship, feeling, and association. 
In addition, at least one of the following must be present for the property to be considered historic: (1) have an association with historic events or activities; (2) have an association with the lives of people significant in the nation’s past; (3) have distinct characteristics of a type, period, or method of construction; be the work of a master; have high artistic values; or be a significant, distinguishable entity; or (4) have yielded, or may likely yield, information important about prehistory or history. In addition, the property generally has to be 50 years of age or more. The National Historic Preservation Act also established a program to provide matching grants to the states and other entities for the preservation and protection of properties on the National Register. Since the act went into effect in 1966, NPS has provided $4.3 million in grants appropriated by the Congress to HBCUs for restoring historic properties. In addition, the Congress authorized $29 million under the Omnibus Parks and Public Lands Management Act of 1996 to fund the restoration of historic properties at selected HBCUs. As of December 1, 1997, $4 million has been appropriated for this purpose. Historic properties that are either on the National Register or have been determined eligible for listing on the National Register as a result of SHPO surveys are eligible for federal grant assistance under the National Historic Preservation Act or the Omnibus Parks and Public Lands Management Act of 1996. Most of the 712 historic properties are at a small number of HBCUs and are mostly buildings rather than structures, sites, or objects. About half of the historic properties identified are already on the National Register of Historic Places. The other half are either eligible for the National Register on the basis of SHPO surveys or considered historic by the HBCUs. About 66 percent of the 712 properties identified in our survey were located at 28 schools, each having 10 or more properties. 
Seventeen schools had no historic properties. These were mostly schools that were created or relocated to other campuses less than 50 years ago and thus were schools that did not have properties eligible to be considered as historic. Table 1 groups the schools according to how many historic properties they reported and shows the number and percentage of the total properties each group had. Historic properties are classified as buildings, structures, sites, or objects. A building may be, for example, a dormitory, gymnasium, house, chapel, or other construction created principally to shelter any form of human activity. A structure is distinguished from a building in that it is used for purposes other than human shelter, for example, a tower, smokestack, or gazebo. A site refers to a location of a significant event or historic occupation or activity where the location itself possesses the historic, cultural, or archeological value. Examples of sites include courtyards, gardens, and cemeteries. Objects are primarily artistic in nature or are relatively small in scale and simply constructed, such as a sculpture, bell, monument, or statue. The photos in figure 1 show examples of these types of properties. On the basis of our survey, 94.4 percent of the historic properties at the HBCUs were buildings. The remaining 5.6 percent were structures, sites, or objects. Figure 2 shows the type of properties and percentage of each type. Of the 712 properties respondents identified, 323, or 45.4 percent, were listed on the National Register. These properties have been evaluated and approved for listing by NPS in accordance with the established National Register criteria. Of the remaining properties, 206, or 28.9 percent, were identified through surveys and evaluations completed by the SHPOs but had not yet been nominated to the National Register. 
The remaining 183 properties, or 25.7 percent, were identified by the schools as historic but were not on the National Register and had not been surveyed or assessed by the SHPOs. In the schools' opinion, these properties would be eligible for the National Register if they were surveyed and assessed by a SHPO and nominated to the National Register. Figure 3 shows the percentage and number of properties listed on the National Register, surveyed and assessed by the SHPOs, or identified by the schools as historic but not included in either of the other two categories.
The schools estimated that the restoration of the 712 historic properties would cost about $755 million. Most of the estimated restoration cost comes from fewer than half of the schools, and about half of the cost is for properties listed on the National Register. Schools have funds set aside to cover less than a tenth of the estimated restoration costs. In their estimates of the cost to restore the 712 historic properties, which totaled $755 million, the schools provided a wide range of figures for individual properties. The wide range can be attributed to (1) whether a property was recently restored and the condition of those not restored and (2) the size of the area needing restoration. Over 90 percent of the total cost ($681.2 million) was associated with 44 of the schools. The cost to restore all the properties at each of these schools ranged from $5 million to over $20 million. Table 2 shows the number and percentage of schools that fall within various cost ranges and the total costs and percentage of total costs within those ranges. As shown in table 2, 18 of the 103 schools did not have any restoration costs associated with the historic properties. Of these, 17 had no properties, and therefore, no cost.
One school had two properties and, as both of them had been recently renovated, no additional funds for restoration were needed, according to the school. Some of the 85 schools with properties having restoration costs also had one or more properties that the schools estimated had no restoration costs. The reasons given by these schools for not identifying any costs included that (1) the property had been recently restored and no additional funds were needed and (2) the property did not need any restoration. For the most part, properties that did not need any restoration were buildings. Of the estimated $755 million needed to restore the 712 properties, $356.7 million was for properties listed on the National Register; $239.1 million was for properties eligible for the National Register on the basis of SHPO surveys and assessments; and $159.2 million was for properties identified by the schools as historic but not included in either of the previous two categories. It should be noted that properties that are not listed on the National Register and that have not been surveyed by the SHPO and assessed to be eligible for listing on the National Register currently are not eligible for federal grant assistance under existing legislation. Therefore, $595.8 million of the $755 million is currently eligible for federal grant assistance. Figure 4 shows the restoration costs of properties by category.
Of the estimated $755 million needed to restore the properties, 36 schools reported that $60.4 million, about 8 percent, had already been set aside to pay the restoration costs for specific properties. As shown in figure 5, of the total set aside, $22.3 million, or 36.9 percent, was from federal sources; $23.8 million, or 39.3 percent, was from state funding sources; and $11.1 million, or 18.4 percent, was from private funding sources.
The remaining $3.2 million, or 5.3 percent, was from sources such as a university fund. The $60.4 million set aside was for the restoration of 58 properties at 36 schools. For 32 of these properties, the amount of the set-aside was the full amount needed to cover the total estimated restoration costs. For the remaining 26 properties, the set-aside covered a portion of the total restoration costs. The schools used different, but common, methods to calculate restoration costs. These methods were an original feasibility report, an updated feasibility report, a contractor’s quotation or proposal, a cost-estimating guidebook, a cost-per-square-foot calculation, and a Consumer Price Index inflator. If the schools used other methods, we asked them to explain what they were. An original or updated feasibility report is typically prepared by an architectural or engineering firm and generally describes what is feasible to restore and how much the work would cost. A contractor’s quotation or proposal is an estimate prepared by a contractor to restore a property for the stated price or bid. A cost-estimating guidebook is a reference guide prepared by the architectural engineering industry that gives probable restoration costs by the type of work to be done, such as roof repair, and the materials used. The cost-per-square-foot method uses the industry’s average restoration cost for a locality multiplied by the number of square feet that need to be restored. A Consumer Price Index inflator is a percentage increase each year based on the inflation rate; this method is used to adjust estimates that have already been prepared. Generally, the most comprehensive methods of estimating restoration costs would be an original or updated feasibility report, followed by a contractor’s quotation or proposal. 
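Of the estimating methods described above, the cost-per-square-foot calculation and the Consumer Price Index inflator reduce to simple arithmetic. A minimal sketch, with invented figures; actual per-square-foot averages and inflation rates would come from industry guides and published CPI data:

```python
def cost_per_square_foot_estimate(square_feet: float, avg_cost_per_sqft: float) -> float:
    """Local industry-average restoration cost multiplied by the area needing work."""
    return square_feet * avg_cost_per_sqft

def cpi_adjusted_estimate(prior_estimate: float, annual_inflation_rates: list) -> float:
    """Inflate a previously prepared estimate by each intervening year's inflation rate."""
    for rate in annual_inflation_rates:
        prior_estimate *= 1 + rate
    return prior_estimate

# Illustrative only: 20,000 sq ft at an assumed $150/sq ft local average,
# then a 2-year-old estimate carried forward at an assumed 3 percent inflation per year.
area_based = cost_per_square_foot_estimate(20_000, 150.0)   # 3,000,000.0
inflated = cpi_adjusted_estimate(1_000_000, [0.03, 0.03])   # about 1,060,900
```

Feasibility reports and contractor quotations, by contrast, are property-specific documents rather than formulas, which is why the report treats them as generally more accurate.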
The cost-estimating guidebook, cost-per-square-foot, and Consumer Price Index inflator methods are generally less accurate because they represent guidance, or averages, rather than estimates on specific properties. Even though an original or updated feasibility report is most likely to be more accurate, some of the schools we visited stated that the cost of paying architect or engineering firms or contractors to provide such estimates was prohibitive and that the estimates could not be completed in the time necessary to respond to our survey. As a result, some schools used other methods, such as the cost-per-square-foot method, to avoid incurring excessive costs and still meet our survey deadline. It should be pointed out that estimating the amount of restoration and preservation needed can be a very complex undertaking. As a result, estimates, even those done by professionals under the best of circumstances, cannot be exact. Many restoration problems, particularly those involving major repairs or renovations, are not visible to the naked eye and may not be uncovered until the restoration actually takes place. In addition, cost estimates to restore and preserve properties are just that, estimates, and are subject to revisions until the work is completed. We asked the schools to identify whether one or a combination of methods was used in calculating their estimates. Of the 673 properties that had restoration costs (39 properties had none), 474 had estimates made using a single method, such as the cost-per-square-foot method. For 199 properties, a combination of methods was used. As shown in figure 6, the cost-per-square-foot method was the predominant single method used. When more than one method was used, many different combinations occurred.
These included, for example, using (1) an original feasibility study estimate with a Consumer Price Index inflator; (2) a cost-estimating guidebook with a Consumer Price Index inflator; and (3) a contractor’s quotation with a cost-estimating guidebook and cost-per-square-foot calculations. We asked the schools to identify whether the preparers of their cost estimates were (1) outside architect/engineering firms, (2) in-house architects/engineers, (3) contractors, (4) school building/maintenance supervisors, or (5) other types of individuals or firms. Typically outside architect/engineering firms prepare original and updated feasibility reports, contractors prepare quotations or proposals, and in-house architects/engineers or building/maintenance supervisors prepare estimates using the guidebooks and cost-per-square-foot method. All of these types of preparers can and will use the Consumer Price Index inflator to adjust previous cost estimates. As shown in figure 7, the estimates of the restoration costs were primarily prepared by in-house architects/engineers, followed by outside architect/engineering firms and in-house building/maintenance supervisors. The information in this report represents the most comprehensive data collected to date on the number of historic properties at HBCUs and the estimated costs to restore those properties. However, the cost estimates presented in this report are based on self-reported data and are subject to limitations. Furthermore, as previously pointed out, estimating the amount of restoration and preservation needed can be a very complex undertaking. As a result, estimates, even those done by professionals under the best of circumstances, cannot be exact. However, these data are a useful starting point for determining the total restoration requirements at HBCUs. We provided copies of a draft of this report to the Department of the Interior for its review and comment. 
The Department commented that highly significant properties on the campuses of historically black colleges and universities are important national historic treasures worthy of care and attention. The Department, however, noted that the magnitude of the repair cost estimates reported by the schools is substantial in terms of the limited level of appropriations available from the Historic Preservation Fund for matching grants to state historic preservation officers and Indian tribes, and the grants available to historically black colleges and universities pursuant to section 507 of the Omnibus Parks and Public Lands Management Act of 1996. The Department also pointed out that funding for increased appropriations for grants to historically black colleges and universities would be subject to authorization and the budgetary controls imposed under the Omnibus Budget Enforcement Act of 1990, as amended. We agree with the Department that there are budgetary limitations that must be addressed when considering the restoration of historic properties at the schools. The Department concurred with our discussion of the methodologies used by the schools in estimating the cost to restore historic properties. It noted that the restoration cost estimates may include some work that would not conform to the Secretary’s Standards for the Treatment of Historic Properties—such as sandblasting brick, which would cause the degeneration of the historic materials and appearance. Thus, not all work included in the estimates may be eligible for federal assistance. The Department agreed that the cost of preservation work on historic properties can escalate beyond initial estimates because the need for some major repairs may not be uncovered until the restoration actually begins. Interior’s comments and our responses are in appendix IV. Our study included all HBCUs defined under the Higher Education Act of 1965, as amended, and listed in 34 C.F.R. 608.2(b). 
As of June 1, 1997, there were 103 such schools. To gather background data and to develop and pretest a standardized data collection instrument (survey) for our study, we visited 12 HBCUs in North Carolina, South Carolina, and Virginia. To determine the number of historic properties on the campuses of the 103 HBCUs, we used three sources. First, we obtained a list of historic properties on the National Register of Historic Places from NPS, including properties (buildings, structures, sites, and objects) within historic districts on the National Register and properties that contribute to the historic significance of the districts. Second, in conjunction with NPS, we contacted each of the 22 SHPOs in whose jurisdictions the 103 HBCUs were located. NPS sent a letter to each of the 22 SHPOs explaining the nature of our study and provided them with lists of the HBCUs in their states as well as the historic properties in the National Register database. NPS also provided us with each SHPO contact. We asked each of the SHPOs to verify the National Register list as of June 1, 1997, or to submit corrected information. NPS used the SHPOs’ information to update the National Register as warranted. We also asked each SHPO to provide a list of properties at each HBCU that would be eligible for the National Register as a result of the surveys and evaluations that the SHPO conducted at the HBCUs prior to June 1, 1997, but that had not been nominated. Third, we sent each of the 103 HBCUs a survey that included (1) a list of its properties that the SHPO had verified were on the National Register and (2) a list of its properties that the SHPO had told us were eligible for the National Register on the basis of its surveys and evaluations. 
We asked each HBCU to verify the existence of these properties, to delete properties that no longer existed or that the school never or no longer owned, and to add properties that the school believed met the criteria to be eligible for the National Register. Because the data from the SHPOs were as of June 1, 1997, we asked the schools to provide their data as of June 1, 1997. A copy of the survey sent to each of the 103 HBCUs is in appendix III. To determine the estimated restoration costs for the historic properties, we asked each HBCU to provide a cost estimate to restore each property identified. We requested that the estimate include only capital improvement costs and not normal day-to-day operating and maintenance costs. We also requested that the capital improvement costs include only costs after June 1, 1997. In other words, if an HBCU was restoring a property, expenditures prior to June 1, 1997, were not to be included in the estimate. Each HBCU decided on the extent of restoration needed in making its estimates. We did not independently verify the accuracy of the cost estimates the HBCUs submitted. We did, however, ask the schools to provide information on the methods they used to estimate the costs, such as whether the estimates were based on feasibility studies, contractors’ bids, or cost-per-square-foot calculations. We also asked the schools to provide the names and credentials of the preparers of the cost estimates, such as whether the preparers were professional architect/engineering firms, contractors, or school building/maintenance supervisors. We conducted our study from April 1997 through January 1998 in accordance with generally accepted government auditing standards. Some of the historic property data and all of the estimated cost data for the restoration and preservation of the historic properties presented in this report are based on self-reported data from the HBCUs. 
The accuracy of the results contained in this report is affected by the extent to which the respondents accurately reported the number of historic properties at their schools and the estimated costs to restore and preserve these properties. Also, according to NPS officials, the estimates may include costs for work that does not meet the Secretary of the Interior’s Standards for the Treatment of Historic Properties, particularly if the individuals preparing the estimates are not familiar with those standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of the Interior; the Secretary of Education; the Director, National Park Service; and the Director, Office of Management and Budget. We will also make copies available to others upon request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix V. Notes to the property tables: A cost of $0 means that the property was restored prior to June 1, 1997. A blank in the cost column means that, because there were zero properties owned, there was no associated cost of restoration. Section 1. According to the National Register of Historic Places, the properties listed in the table below are on your campus and are listed on the National Register as of June 1, 1997. Column headings: (a) Is the property owned and/or still existing? (b) How was the estimate calculated? (c) Principal preparer of latest cost estimate. (d) Estimated total cost to preserve & restore. Please provide only the estimated cost to be spent after 6/1/97, and only capital expenditure costs, not operating and maintenance costs.
Response codes and entry instructions for the table in Section 1:

Column (a): Is the property owned and/or still existing?
1 = Yes, both owned and still exists (enter 1, then complete columns b, c, and d of this section)
2 = Yes, both owned and exists, but will be destroyed
3 = Never or no longer owned, but still exists
4 = No longer exists (please describe what happened to the property)
(If 2, 3, or 4, enter the code and STOP; do not complete columns b, c, and d for these properties.)

Column (b): How was the estimate calculated? (Check all that apply.)
1 = Original feasibility report
2 = Updated feasibility report
3 = Actual contractor quotations or contractor proposals
4 = Cost-estimating guidebook
5 = Cost per square foot
6 = Consumer Price Index (CPI) inflator
7 = Other (specify)

Column (c): Principal preparer of latest cost estimate: a. Name; b. Title; c. Telephone number; d. Credentials of preparer (enter code from below).
1 = Outside architect/engineering firm
2 = In-house or school architect/engineer
3 = Contractor - other than architect/engineering firm
4 = School building/maintenance supervisor
5 = Other (specify)

Column (d): Total cost to preserve & restore (in thousands): $_______,_______,000. Of the total cost estimated, specify the source and amount of funds, if any, that have been set aside from each source (if none, enter 0).

Section 2. The properties listed in the table below are not listed on the National Register as of June 1, 1997, but are structures that a State Historic Preservation Officer (SHPO) has assessed and identified as being eligible for listing but has not yet been listed on the Register. Column headings: (a) Is the property owned and/or still existing? (b) How was the estimate calculated? (c) Principal preparer of latest cost estimate. (d) Estimated total cost to preserve & restore. Please provide only the estimated cost to be spent after 6/1/97, and only capital expenditure costs, not operating and maintenance costs.
Response codes and entry instructions for the table in Section 2 (same coding as Section 1):

Column (a): Is the property owned and/or still existing? 1 = Yes, both owned and still exists (enter 1, then complete columns b, c, and d of this section); 2 = Yes, both owned and exists, but will be destroyed; 3 = Never or no longer owned, but still exists; 4 = No longer exists (please describe what happened to the property). (If 2, 3, or 4, enter the code and STOP; do not complete columns b, c, and d for these properties.)

Column (b): How was the estimate calculated? (Check all that apply.) 1 = Original feasibility report; 2 = Updated feasibility report; 3 = Actual contractor quotations or contractor proposals; 4 = Cost-estimating guidebook; 5 = Cost per square foot; 6 = Consumer Price Index (CPI) inflator; 7 = Other (specify).

Column (c): Principal preparer of latest cost estimate: a. Name; b. Title; c. Telephone number; d. Credentials of preparer (enter code): 1 = Outside architect/engineering firm; 2 = In-house or school architect/engineer; 3 = Contractor - other than architect/engineering firm; 4 = School building/maintenance supervisor; 5 = Other (specify).

Column (d): Total cost to preserve & restore (in thousands): $_______,_______,000. Of the total cost estimated, specify the source and amount of funds, if any, that have been set aside from each source (if none, enter 0).

Section 3. Please list any other properties that have not been identified in either Section 1 or 2. Column headings: (a) Is the property owned and/or still existing? (b) How was the estimate calculated? (c) Principal preparer of latest cost estimate. (d) Estimated total cost to preserve & restore. Please provide only the estimated cost to be spent after 6/1/97, and only capital expenditure costs, not operating and maintenance costs.
Response codes and entry instructions for the table in Section 3 (same coding as Sections 1 and 2):

Column (a): Is the property owned and/or still existing? 1 = Yes, both owned and still exists (enter 1, then complete columns b, c, and d of this section); 2 = Yes, both owned and exists, but will be destroyed; 3 = Never or no longer owned, but still exists; 4 = No longer exists (please describe what happened to the property). (If 2, 3, or 4, enter the code and STOP; do not complete columns b, c, and d for these properties.)

Column (b): How was the estimate calculated? (Check all that apply.) 1 = Original feasibility report; 2 = Updated feasibility report; 3 = Actual contractor quotations or contractor proposals; 4 = Cost-estimating guidebook; 5 = Cost per square foot; 6 = Consumer Price Index (CPI) inflator; 7 = Other (specify).

Column (c): Principal preparer of latest cost estimate: a. Name; b. Title; c. Telephone number; d. Credentials of preparer (enter code): 1 = Outside architect/engineering firm; 2 = In-house or school architect/engineer; 3 = Contractor - other than architect/engineering firm; 4 = School building/maintenance supervisor; 5 = Other (specify).

Column (d): Total cost to preserve & restore (in thousands): $_______,_______,000. Of the total cost estimated, specify the source and amount of funds, if any, that have been set aside from each source (if none, enter 0).

The following are GAO’s comments on the Department of the Interior’s letter dated January 12, 1998. 1. We have referred to the National Register in the “Results in Brief” as “the” official list of properties as suggested. 2. We have revised footnote 1 changing “specific” to “individual” properties, correcting the spelling of “identify,” and changing the phrase “within the district” to “of the district.” 3. We added a sentence to footnote 2 that, according to the Department of the Interior, there currently are National Park Service-approved state historic preservation programs in all states. 4. We revised the text to add “location” and “feeling and association” to the criteria for evaluating properties as suggested. 5.
We reworded the definition of a building as suggested. 6. We added text to the report to note that properties that are not listed on the National Register and have not been surveyed by state historic preservation officers and assessed to be eligible for listing on the National Register currently are not eligible for federal grant assistance under existing legislation. As a result, $595.8 million of the $755 million total estimated restoration cost is currently eligible for federal grant assistance. 7. We have made reference to the Department’s concurrence with our discussion of the historically black colleges and universities’ cost-estimating methodologies in the “Agency Comments” section in the body of the report. Doreen Feldman The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO provided information on the: (1) number of historic properties located at historically black colleges and universities (HBCU); and (2) estimated cost to restore and preserve these properties. 
GAO noted that: (1) all 103 historically black colleges and universities responded to GAO's survey; (2) respondents identified 712 historic properties that were owned by the schools; (3) most of these properties were at a small number of schools and were mostly buildings rather than structures, sites, or objects; (4) about half, 323, of the historic properties identified were already on the National Register of Historic Places--the official list of sites, buildings, structures, and objects significant in American history, architecture, archeology, engineering, and culture; (5) the others were either eligible for the National Register on the basis of state historic preservation officers' surveys or considered historic by the colleges and universities; (6) according to information the schools provided, an estimated $755 million is needed to restore and preserve the 712 properties; (7) the cost estimates include the capital improvement costs to restore and preserve the historic properties, such as making the properties more accessible to people with disabilities, replacing roofs, and removing lead-based paint or asbestos; (8) the respondents were asked not to include routine maintenance costs; (9) some of the schools identified a total of about $60 million in funds that had already been set aside to restore and preserve specific historic properties; and (10) the schools used a number of different methods to calculate the estimated restoration and preservation costs. |
Handling increasing service workloads is a critical challenge facing SSA. The agency is processing a growing number of claims for Social Security benefits. SSA estimates that it will face continued growth in beneficiaries over the next few decades as the population ages and life expectancies increase. The number of OASI and DI beneficiaries is estimated to increase substantially between calendar years 1997 and 2010—from approximately 44 million to over 54 million. Recognizing constraints on its staff and resources, SSA has moved to better serve its increasing beneficiary population and improve its productivity by redesigning its work processes and modernizing the computer systems used to support these processes. A key aspect of the modernization effort is the agency’s transition from its current centralized mainframe-based computer processing environment to a highly distributed client/server processing environment. IWS/LAN is expected to play a critical role in the modernization by providing the basic automation infrastructure for using client/server technology to support the redesigned work processes and improve the availability and timeliness of information to employees and appropriate users. Under this initiative, SSA plans to replace approximately 40,000 “dumb” terminals and other computer equipment used in over 2,000 SSA and state DDS sites with an infrastructure consisting of networks of intelligent workstations connected to each other and to SSA’s mainframe computers. The national IWS/LAN initiative consists of two phases. During phase I, SSA plans to acquire 56,500 workstations, 1,742 LANs, 2,567 notebook computers, systems furniture, and other peripheral devices. Implementation of this platform is intended to provide employees in the sites with office automation and programmatic functionality from one terminal. It also aims to provide the basic, standardized infrastructure to which additional applications and functionality can later be added. 
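As a rough check on the projection above, the compound annual growth rate implied by the report's round numbers (approximately 44 million beneficiaries in 1997, just over 54 million in 2010) can be computed directly; the figures are the report's, and the calculation is an illustrative back-of-the-envelope check, not SSA's actuarial method.

```python
# Compound annual growth rate implied by two endpoint values.
# Endpoints are the report's round numbers; this is an illustrative check.

def implied_annual_growth(start: float, end: float, years: int) -> float:
    """Annual rate r such that start * (1 + r) ** years == end."""
    return (end / start) ** (1 / years) - 1

rate = implied_annual_growth(44e6, 54e6, 2010 - 1997)
print(f"{rate:.2%}")   # about 1.6% per year, compounded over 13 years
```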
The projected 7-year life-cycle cost of phase I is $1.046 billion, covering the acquisition, installation, and maintenance of the IWS/LAN equipment. Under a contract with Unisys Corporation, SSA began installing equipment for this phase in December 1996; it anticipates completing these installations in June 1999. Through fiscal year 1997, SSA had reported spending approximately $565 million on acquiring workstations, LANs, and other services. Phase II is intended to build upon the IWS/LAN infrastructure provided through the phase I effort. Specifically, during this phase, SSA plans to acquire additional hardware and software, such as database engines, scanners, bar code readers, and facsimile and imaging servers, needed to support future process redesign initiatives and client/server applications. SSA plans to award a series of phase II contracts in fiscal year 1999 and to carry out actual installations under these contracts during fiscal years 1999 through 2001. Currently, SSA is developing the first major programmatic software application to operate on IWS/LAN. This software—the Reengineered Disability System (RDS)—is intended to support SSA’s modernized disability claims process in the new client/server environment. Specifically, RDS is intended to automate and improve the Title II and Title XVI disability claims processes from the initial claims-taking in the field office, to the gathering and evaluation of medical evidence in state DDSs, to payment execution in the field office or processing center and the handling of appeals in hearing offices. In August 1997, SSA began pilot testing RDS for the specific purposes of (1) assessing the performance, cost, and benefits of this software and (2) determining supporting IWS/LAN phase II equipment requirements. 
Agencies, in undertaking systems modernization efforts, are required by the Clinger-Cohen Act of 1996 to ensure that their information technology investments are effectively managed and significantly contribute to improvements in mission performance. The Government Performance and Results Act of 1993 requires agencies to set goals, measure performance, and report on their accomplishments. One of the challenges that SSA faces in implementing IWS/LAN is ensuring that the planned systems and other resources are focused on helping its staff process all future workloads and deliver improved service to the public. In a letter and a report to SSA in 1993 and 1994, respectively, we expressed concerns about SSA’s ability to measure the progress of IWS/LAN because it had not established measurable cost and performance goals for this initiative. In addition, SSA faces the critical challenge of ensuring that all of its information systems are Year 2000 compliant. By the end of this century, SSA must review all of its computer software and make the changes needed to ensure that its systems can correctly process information relating to dates. These changes affect not only SSA’s new network but computer programs operating on both its mainframe and personal computers. In October 1997, we reported that while SSA had made significant progress in its Year 2000 efforts, it faced the risk that not all of its mission-critical systems will be corrected by the turn of the century. At particular risk were the systems used by state DDSs to help SSA process disability claims. Our objectives were to (1) determine the status of SSA’s implementation of IWS/LAN, (2) assess whether SSA and state DDS operations have been disrupted by the installations of IWS/LAN equipment, and (3) assess SSA’s practices for managing its investment in the IWS/LAN initiative. 
To determine the status of SSA’s implementation of IWS/LAN, we analyzed key project documentation, including the IWS/LAN contract, project plans, and implementation schedules. We observed implementation activities at select SSA field offices in Alabama, Florida, Georgia, Minnesota, South Carolina, Texas, and Virginia; at program service centers in Birmingham, Alabama, and Philadelphia, Pennsylvania; and at teleservice centers in Minneapolis, Minnesota, and Fort Lauderdale, Florida. In addition, we reviewed IWS/LAN plans and observed activities being undertaken by state DDS officials in Alabama, Georgia, and Minnesota. We also interviewed representatives of the IWS/LAN contractor—Unisys Corporation—to discuss the status of the implementation activities. To assess whether SSA and state DDS operations have been disrupted by the installations of IWS/LAN equipment, we reviewed planning guidance supporting the implementation process, such as the IWS/LAN Project Plan, and analyzed reports summarizing implementation activities and performance results identified during pilot efforts. We also interviewed SSA site managers, contractor representatives, and IWS/LAN users to identify installation and/or performance issues, and observed operations in select SSA offices where IWS/LAN equipment installations had been completed. In addition, we discussed IWS/LAN problems and concerns with DDS officials in 10 states: Alabama, Arkansas, Arizona, Delaware, Florida, Louisiana, New York, Virginia, Washington, and Wisconsin, and with the president of the National Council of Disability Determination Directors, which is a representative body for all state DDSs. To assess SSA’s management of the IWS/LAN investment, we applied our guide for evaluating and assessing how well federal agencies select and manage their investments in information technology resources. 
We evaluated SSA’s responses to detailed questions about its investment review process that were generated from the evaluation guide and compared the responses to key agency documents generated to satisfy SSA’s process requirements. We also reviewed IWS/LAN cost, benefit, and risk analyses to assess their compliance with OMB guidance. We did not, however, validate the data contained in SSA’s documentation. We performed our work from July 1997 through March 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Social Security or his designee. The Commissioner provided us with written comments, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. Using a strategy that includes installing workstations and LANs in up to 20 sites per weekend, SSA, through mid-March 1998, had generally met its phase I schedule for implementing IWS/LAN. However, the contractor installing IWS/LAN has expressed concerns about the availability of the workstations specified in the contract, raising questions as to whether they can continue to be acquired. In addition, the pilot effort that SSA began in August 1997 to assess the performance, cost, and benefits of RDS and identify IWS/LAN phase II requirements has experienced delays that could affect the schedule for implementing phase II of this initiative. Under the phase I schedule, 56,500 intelligent workstations and 1,742 LANs are to be installed in approximately 2,000 SSA and state DDS sites between December 1996 and June 1999. The schedule called for approximately 30,500 workstations and about 850 LANs to be installed by mid-March 1998. According to SSA records, the agency generally met this schedule with the actual installation of 31,261 workstations and 850 LANs by March 15, 1998. These installations occurred at 753 SSA sites and 20 DDS sites (covering 12 states and the federal DDS). 
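The roll-out figures above can be expressed as simple progress fractions. The installation counts are the report's; the helper function is only an illustrative sketch of the arithmetic.

```python
# Progress check on the phase I roll-out: 31,261 of 56,500 workstations and
# 850 of 1,742 LANs installed as of March 15, 1998 (figures from the report).

def pct_complete(installed: int, total: int) -> float:
    """Fraction of the planned phase I units installed to date."""
    return installed / total

ws_done = pct_complete(31_261, 56_500)   # about 55% of workstations
lan_done = pct_complete(850, 1_742)      # about 49% of LANs
ahead_of_plan = 31_261 - 30_500          # 761 workstations ahead of the mid-March target
```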
SSA reported in its fiscal year 1997 accountability report that the percentage of front-line employees using IWS/LAN increased to 50.2 percent—exceeding the fiscal year 1997 Results Act goal by 2.2 percentage points. The standard intelligent workstation configuration includes a 100-megahertz Pentium personal computer with 32 megabytes of random access memory, the Windows NT 4.0 operating system, a 1.2-gigabyte hard (fixed) disk drive, 15-inch color display monitor, and 16-bit network card with adaptation cable. Last year the contractor, Unisys, submitted a proposal to upgrade the intelligent workstation by substituting a higher-speed processor at additional cost. Unisys noted that it was having difficulty obtaining 100-megahertz workstations. However, SSA did not accept Unisys’ upgrade proposal. Further, the Deputy Commissioner for Systems stated that SSA did not believe it was necessary to upgrade to a faster processor because the 100-megahertz workstation meets its current needs. For its modernization efforts to succeed, SSA must have the necessary workstations to support its processing needs. This is particularly important given the agency’s expressed intent to operate future client/server software applications on IWS/LAN to support redesigned work processes. Adding database engines, facsimile, imaging, and other features like those planned by SSA during phase II of the IWS/LAN initiative could demand a workstation with more memory, larger disk storage, and a processing speed higher than 100 megahertz. Personal computers available in today’s market operate at about three times this speed. Preliminary testing of the RDS software has already shown the need for SSA to upgrade the workstation’s random access memory from 32 megabytes to 64 megabytes. However, systems officials told us that their tests have not demonstrated a need for a faster workstation. As discussed in the following section, SSA is encountering problems and delays in completing its tests of the RDS software.
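The hardware question raised above (whether the phase I baseline workstation can carry emerging application needs) can be sketched as a spec comparison. The baseline figures are from the report; the requirements entries reflect only the 64-megabyte memory finding from preliminary RDS testing, and the comparison itself is a hypothetical illustration, not SSA's evaluation method.

```python
# Baseline phase I workstation configuration (figures from the report)
baseline = {"cpu_mhz": 100, "ram_mb": 32, "disk_gb": 1.2, "os": "Windows NT 4.0"}

# What preliminary RDS testing indicated so far: 64 MB of RAM needed,
# but no demonstrated need for a faster processor (hypothetical structure)
rds_needs = {"cpu_mhz": 100, "ram_mb": 64}

# Collect any spec where the baseline falls short of the stated need
shortfalls = {spec: (baseline[spec], need)
              for spec, need in rds_needs.items()
              if baseline.get(spec, 0) < need}
# shortfalls -> {"ram_mb": (32, 64)}
```

Under these assumptions only the memory shortfall is flagged, consistent with the report: tests showed a need for more RAM but not, so far, for a faster processor.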
In addition, at the conclusion of our review, SSA had begun holding discussions with Unisys regarding the availability of the 100-megahertz workstations. SSA has experienced problems and delays in the pilot effort that it initiated in August 1997 to assess the performance, cost, and benefits of RDS and identify IWS/LAN phase II requirements. Under the pilot, an early release of the software is being tested in one SSA field office and the federal DDS to acquire feedback from end users regarding its performance. SSA planned to make improvements to the software based on these pilot results and then expand its testing of the software to all SSA and DDS components in the state of Virginia. The results of the pilot testing in Virginia were to be used in determining hardware and software requirements to support IWS/LAN phase II acquisitions, beginning in fiscal year 1999. SSA encountered problems with RDS during its initial pilot testing. For example, systems officials stated that, using RDS, the reported productivity of claims representatives in the SSA field office dropped. Specifically, the officials stated that before the installation of RDS, each field office claims representative processed approximately five case interviews per day. After RDS was installed, each claims representative could process only about three cases per day. At the conclusion of our review, systems officials stated that because the RDS software has not performed as anticipated, SSA has entered into a contract with Booz-Allen and Hamilton to independently evaluate and recommend options for proceeding with the development of RDS. In addition, SSA has delayed expanding the pilot by 9 months—from October 1997 to July 1998. This is expected to further delay SSA’s national roll-out and implementation of RDS. 
Moreover, because RDS is essential to identifying IWS/LAN phase II requirements, the Deputy Commissioner for Systems has stated that delaying the pilot will likely result in slippages in SSA’s schedule for acquiring and implementing phase II equipment. Nationwide implementation of IWS/LAN is a complex logistical task for SSA, requiring coordination of site preparation (such as electrical wiring and cabling) in over 2,000 remote locations, contractor-supplied and installed furniture and intelligent workstation components, and training of over 70,000 employees in SSA and DDS locations. Moreover, once installed, these systems must be managed and maintained in a manner that ensures consistent and quality service to the public. During our review, staff in the 11 SSA offices that we visited generally stated that they had not experienced any significant disruptions in their ability to serve the public during the installation and operation of IWS/LAN. They attributed the smooth transition to SSA’s implementation of a well-defined strategy for conducting site preparations, equipment installations, and employee training. Part of that strategy required equipment installation and testing to be performed on weekends so that the IWS/LAN equipment would be operational by the start of business on Monday. In addition, staff were rotated through training and client service positions and augmented with staff borrowed from other field offices to maintain service to the public during the post-installation training period. Further, because the new workstations provide access to the same SSA mainframe software applications as did the old terminals and LAN equipment, staff were able to process their workloads in a similar manner as with the previous environment. State DDSs generally were less satisfied with the installation and operation of IWS/LAN in their offices. 
Administrators and systems staff in the 10 DDS sites that we visited expressed concerns about the loss of network management and control over IWS/LAN operations in their offices and dissatisfaction with the service and technical support received from the contractor following the installation of IWS/LAN equipment. In particular, SSA initially planned to centrally manage the operation and maintenance of IWS/LAN equipment. However, DDS officials in 7 of the 10 offices expressed concern that with SSA managing their networks and operations, DDSs can no longer make changes or fixes to their equipment locally and instead, must rely on SSA for system changes or network maintenance. Eight of the 10 DDSs reported that under this arrangement, the IWS/LAN contractor had been untimely in responding to certain of their requests for service, resulting in disruptions to their operations. For example, a DDS official in one state told us that at the time of our discussion, she had been waiting for approximately 2 weeks for the IWS/LAN contractor to repair a hard disk drive in one of the office’s workstations. In January 1998, the National Council of Disability Determination Directors (NCDDD), which represents the state DDSs, wrote to SSA to express the collective concerns of the DDSs regarding SSA’s plan to manage and control their IWS/LAN networks. NCDDD recommended that SSA pilot the IWS/LAN equipment in one or more DDS offices to evaluate options for allowing the states more flexibility in managing their networks. It further proposed that IWS/LAN installations be delayed for states whose operations would be adversely affected by the loss of network control. At least one state DDS—Florida—refused to continue with the roll-out of IWS/LAN in its offices until this issue is resolved.
Because IWS/LAN is expected to correct Year 2000 deficiencies in some states’ hardware, however, NCDDD cautioned that delaying the installation of IWS/LAN could affect the states’ progress in becoming Year 2000 compliant. At the conclusion of our review, the Deputy Commissioner for Systems told us that SSA had begun holding discussions with state officials in early March 1998 to identify options for addressing the states’ concerns about the management of their networks under the IWS/LAN environment. Federal legislation and OMB directives require agencies to manage major information technology acquisitions as investments. In implementing IWS/LAN, SSA has followed a number of practices that are consistent with these requirements, such as involving executive staff in the selection and management of the initiative and assessing the cost, benefits, and risks of the project to justify its acquisition. However, SSA’s practices have fallen short of ensuring full and effective management of the investment in IWS/LAN because it did not include plans for measuring the project’s actual contributions to improved mission performance. According to the Clinger-Cohen Act and OMB guidance, effective technology investment decision-making requires that processes be implemented and data collected to ensure that (1) project proposals are funded on the basis of management evaluations of costs, risks, and expected benefits to mission performance and (2) once funded, projects are controlled by examining costs, the development schedule, and actual versus expected results. These goals are accomplished by considering viable alternatives, preparing valid cost-benefit analyses, and having senior management consistently make data-driven decisions on major projects. SSA followed an established process for acquiring IWS/LAN that met a number of these requirements. 
For example, senior management reviewed and approved the project’s acquisition and has regularly monitored the progress of the initiative against competing priorities, projected costs, schedules, and resource availability. SSA also conducted a cost-benefit analysis to justify its implementation of IWS/LAN. This analysis was based on comparisons of the time required to perform certain work tasks before and after the installation of IWS/LAN equipment in 10 SSA offices selected for a pilot study during January through June 1992. For example, the pilot tested the time savings attributed to SSA employees not having to walk from their desks or wait in line to use a shared personal computer. Based on the before and after time savings identified for each work task, SSA projected annual savings from IWS/LAN of 2,160 workyears that could be used to process growing workloads and improve service. In a review of the IWS/LAN initiative in 1994, the Office of Technology Assessment (OTA) found SSA’s cost-benefit analysis to be sufficient for justifying the acquisition of IWS/LAN. Although SSA followed certain essential practices for acquiring IWS/LAN, it has not yet implemented performance goals and measures to assess the impact of this investment on productivity and mission performance. Under the Clinger-Cohen Act, agencies are to establish performance measures to gauge how well their information technology supports program efforts and better link their information technology plans and usage to program missions and goals. Successful organizations rely heavily upon performance measures to operationalize mission goals and objectives, quantify problems, evaluate alternatives, allocate resources, track progress, and learn from mistakes. Performance measures also help organizations determine whether their information systems projects are really making a difference, and whether that difference is worth the cost. 
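SSA's projection method, measuring per-task time savings before and after installation and scaling them by annual task volumes into workyears, can be sketched in a few lines. All task figures below are invented for illustration; the 2,080-hour workyear is a common federal convention, not a number taken from SSA's analysis.

```python
# Hypothetical illustration of SSA's cost-benefit approach: per-task time
# savings observed in the before/after pilots are scaled by annual task
# volumes, then converted into workyears. All figures are invented.

HOURS_PER_WORKYEAR = 2080  # assumed convention, not an SSA figure

def projected_workyears(task_savings):
    """task_savings: list of (minutes saved per occurrence, occurrences per year)."""
    total_hours = sum(mins * volume / 60 for mins, volume in task_savings)
    return total_hours / HOURS_PER_WORKYEAR

tasks = [
    (2.0, 6_000_000),  # e.g., no longer walking to a shared PC (invented)
    (1.5, 4_000_000),  # e.g., no longer waiting in line for a PC (invented)
]

print(round(projected_workyears(tasks)))  # → 144
```

With these invented volumes the sketch yields roughly 144 workyears; SSA's actual analysis, covering many more tasks and offices, projected 2,160.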
The Clinger-Cohen Act also requires that large information technology projects be implemented incrementally and that each phase should be cost effective and provide mission-related benefits. It further requires that performance measures be established for each phase to determine whether expected benefits were actually achieved. In our September 1994 report, we noted that as part of an effort with the General Services Administration (GSA) to develop a “yardstick” to measure the benefits that IWS/LAN will provide the public, SSA had initiated actions aimed at identifying cost and performance goals for IWS/LAN. SSA identified six categories of performance measures that could be used to determine the impact of IWS/LAN technology on service delivery goals and reengineering efforts. It had planned to establish target productivity gains for each measure upon award of the IWS/LAN contract. GSA was to then use these measures to assess IWS/LAN’s success. As of March 1998, however, SSA had established neither the target goals to help link the performance measures to the agency’s strategic objectives nor a process for using the measures to assess IWS/LAN’s impact on agency productivity and mission performance. In addition, although the Clinger-Cohen Act and OMB guidance state that agencies should perform retrospective evaluations after completing an information technology project, SSA officials told us that they do not plan to conduct a post-implementation review of the IWS/LAN project once it is fully implemented. 
According to the Director of the Information Technology Systems Review Staff, SSA currently does not plan to use any of the measures to assess the project’s impact on agency productivity and mission performance because (1) the measures had been developed to fulfill a specific GSA procurement requirement that no longer exists and (2) it believes the results of the pilots conducted in 1992 sufficiently demonstrated the savings that will be achieved with each IWS/LAN installation. It is essential that SSA follow through with the implementation of a performance measurement process for each phase of the IWS/LAN effort. Measuring performance is necessary to show how this investment is contributing to the agency’s goal of improving productivity. Among leading organizations that we have observed, managers use performance information to continuously improve organizational processes, identify performance gaps, and set improvement goals. The performance problems that SSA has already encountered in piloting software on IWS/LAN make it even more critical for SSA to implement performance measures and conduct post-implementation reviews for each phase of this initiative. SSA believes that the results of its pilot effort undertaken in 1992 to justify the acquisition of IWS/LAN sufficiently demonstrate that it will achieve its estimated workyear savings. However, the pilot results are not an acceptable substitute for determining the actual contribution of IWS/LAN to improved productivity. In particular, although the pilots assessed task savings for specific functions performed in each office, they did not demonstrate IWS/LAN’s actual contribution to improved services gained through improvements in the accuracy of processing or improvements in processing times. In addition, OTA noted in its 1994 review that the relatively small number of pilots may not have adequately tested all the potential problems that could arise when the equipment is deployed at all of SSA’s sites. 
Further, information gained from post-implementation reviews is critical for improving how the organization selects, manages, and uses its investment resources. Without a post-implementation review of each phase of the IWS/LAN project, SSA cannot validate projected savings, identify needed changes in systems development practices, and ascertain the overall effectiveness of each phase of this project in serving the public. Post-implementation reviews also serve as the basis for improving management practices and avoiding past mistakes. SSA is relying on IWS/LAN to play a vital role in efforts to modernize its work processes and improve service delivery, and it has made good progress in implementing workstations and LANs that are a part of this effort. However, equipment availability and capability issues, problems in developing software that is to operate on the IWS/LAN workstations, and concerns among state DDSs that their equipment will not be adequately managed and serviced by SSA, threaten the continued progress and success of this initiative. Moreover, absent target goals and a defined process for measuring performance, SSA will not be able to determine whether its investment in each phase of IWS/LAN is yielding expected improvements in service to the public. 
To strengthen SSA’s management of its IWS/LAN investment, we recommend that the Commissioner of Social Security direct the Deputy Commissioner for Systems to:
- immediately assess the adequacy of workstations specified in the IWS/LAN contract, and based on this assessment, determine (1) the number and capacity of workstations required to support the IWS/LAN initiative and (2) its impact on the IWS/LAN implementation schedule;
- work closely with state DDSs to promptly identify and resolve network management concerns and establish a strategy for ensuring the compliance of those states relying on IWS/LAN hardware for Year 2000 corrections; and
- establish a formal oversight process for measuring the actual performance of each phase of IWS/LAN, including identifying the impact that each IWS/LAN phase has on mission performance and conducting post-implementation reviews of the IWS/LAN project once it is fully implemented.

In commenting on a draft of this report, SSA generally agreed with the issues we identified and described actions that it is taking in response to our recommendations to resolve them. These actions include (1) determining remaining IWS/LAN workstation needs, (2) addressing state DDS network management concerns and related Year 2000 compliance issues, and (3) implementing a performance measurement strategy for the IWS/LAN initiative. These actions are important to the continued progress and success of the IWS/LAN initiative, and SSA must be diligent in ensuring that they are fully implemented. In responding to our first recommendation to assess the adequacy of workstations specified in the IWS/LAN contract, SSA stated that it had determined the number of workstations required to complete the IWS/LAN implementation and was working on a procurement strategy and schedule for this effort. SSA also stated that its current tests do not show a need for workstations with a processing speed higher than 100 megahertz.
The agency further noted that terms and conditions in the IWS/LAN contract will enable it to acquire a higher powered computer or other technology upgrades when the need arises. As discussed earlier in our report, it is important that SSA have the necessary workstations to support its processing needs in the redesigned work environment. Therefore, as SSA continues its aggressive pace in implementing IWS/LAN, it should take all necessary steps to ensure that it has fully considered its functional requirements over the life of these workstations. Doing so is especially important since SSA has encountered problems and delays in completing tests of the RDS software that are vital to determining future IWS/LAN requirements. Our second recommendation concerned SSA’s working closely with state DDSs to identify and resolve network management concerns and establish a strategy for ensuring the compliance of those states relying on IWS/LAN hardware for Year 2000 corrections. SSA identified various actions, which if successfully implemented, could help resolve DDS concerns regarding network management and the maintenance of IWS/LAN equipment, and facilitate its efforts in becoming Year 2000 compliant. In responding to our final recommendation that it establish a formal oversight process for measuring the actual performance of each phase of IWS/LAN, SSA agreed that performance goals and measures should be prescribed to determine how well information technology investments support its programs and provide expected results. SSA stated that it is determining whether expected benefits are being realized from IWS/LAN installations through in-process and postimplementation assessments. SSA further noted that its planning and budgeting process ensures that it regularly assesses the impact of IWS/LAN on agency productivity and mission performance. 
However, during the course of our review, SSA could not provide specific information to show how its planning and budgeting process and data on workyear savings resulting from IWS/LAN installations were being used to assess the project’s actual contributions to improved productivity and mission performance. In addition, two of the three measures that SSA identified in its response—the number of IWS/LANs installed per month and existing terminal redeployment and phase-out—provide information that is more useful for assessing the progress of SSA’s IWS/LAN installations and existing terminal redeployment efforts. To ensure that its investments are sound, it is crucial that SSA develop measures to assess mission-related benefits, and use them in making project decisions. We will continue to monitor SSA’s efforts in assessing the benefits of IWS/LAN installations through its in-process and postimplementation assessments and its planning and budgeting process. We are sending copies of this report to the Commissioner of Social Security; the Director of the Office of Management and Budget; appropriate congressional committees; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at [email protected] if you have any questions concerning this report. Major contributors to this report are listed in appendix II. Pamlutricia Greenleaf, Senior Evaluator Kenneth A. Johnson, Senior Information Systems Analyst | Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) ongoing efforts to implement its intelligent workstation/local area network (IWS/LAN) project, focusing on: (1) determining the status of SSA's implementation of IWS/LAN; (2) assessing whether SSA and state disability determination service (DDS) operations have been disrupted by the installations of IWS/LAN equipment; and (3) assessing SSA's practices for managing its investment in IWS/LAN. GAO noted that: (1) SSA has moved aggressively in installing intelligent workstations and LANs since initiating IWS/LAN acquisitions in December 1996; (2) as of mid-March 1998, it had completed the installation of about 31,000 workstations and 850 LANs, generally meeting its implementation schedule for phase I of the initiative; (3) the contractor that is installing IWS/LAN has expressed concerns about the future availability of the intelligent workstations that SSA is acquiring; (4) problems encountered in developing software intended to operate on IWS/LAN could affect SSA's planned schedule for proceeding with phase II of this initiative; (5) staff in SSA offices generally reported no significant disruptions in their ability to serve the public during the installation and operation of their IWS/LAN equipment; (6) some state DDSs reported that SSA's decision to manage and control DDS networks remotely and the IWS/LAN contractor's inadequate responses to DDS' service calls have led to disruptions in some of their operations; (7) because IWS/LAN is expected
to correct year 2000 deficiencies in some states' hardware, delaying the installation of IWS/LAN could affect states' progress in becoming year 2000 compliant; (8) consistent with the Clinger-Cohen Act of 1996 and Office of Management and Budget guidance, SSA has followed some of the essential practices required to effectively manage its IWS/LAN investment; (9) SSA has not established essential practices for measuring IWS/LAN's contribution to improving the agency's mission performance; (10) although the agency has developed baseline data and performance measures that could be used to assess the project's impact on mission performance, it has not defined target performance goals or instituted a process for using the measures to assess the impact of IWS/LAN on mission performance; (11) SSA does not plan to conduct a post-implementation review of IWS/LAN once it is fully implemented; and (12) without targeted goals and a defined process for measuring performance both during and after the implementation of IWS/LAN, SSA cannot be assured of the extent to which this project is improving service to the public or that it is actually yielding the savings anticipated from this investment. |
On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established the SPP, whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. All of the 5 pilot airports that applied were approved to continue as part of the SPP, and since its establishment, 21 additional airport applications have been accepted by the SPP. In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act, enacted in February 2012. Among other provisions, the act provides the following: Not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application. The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport. Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial. All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP Program Management Office (PMO), as well as to the FSD for its airport. 
Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny the application resides in the discretion of the TSA Administrator. According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by FAA Modernization Act, there are many other factors that are weighed in considering an airport’s application for SPP participation. For example, the potential impacts of any upcoming projects at the airport are considered. Once an airport is approved for SPP participation and a private screening contractor has been selected by TSA, the contract screening workforce assumes responsibility for screening passengers and their property and is required to adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at non-SPP airports. TSA has developed guidance to assist airport operators in completing their SPP applications, as we recommended in December 2012. Specifically, in December 2012, we reported that TSA had developed some resources to assist SPP applicants, but it had not provided guidance on its application and approval process to assist airports. As it was originally implemented in 2004, the SPP application process required only that an interested airport operator submit an application stating its intention to opt out of federal screening as well as its reasons for wanting to do so. In 2011, TSA revised its SPP application to reflect the “clear and substantial advantage” standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA’s security operations. 
At that time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a “clear and substantial advantage to TSA security operations” or TSA’s basis for determining whether an airport had met that standard. As previously noted, in March 2012 TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act, which became law in February 2012. Among other things, the revised application no longer included the “clear and substantial advantage” question, but instead included questions that requested applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. In December 2012, we reported that while TSA provided general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ), the agency had not issued guidance to assist airports with completing the revised application or explained to airports how it would evaluate applications given the changes brought about by the FAA Modernization Act. For example, neither the application instructions nor the FAQs addressed TSA’s SPP application evaluation process or its basis for determining whether an airport’s entry into the SPP would compromise security or affect cost-efficiency and effectiveness. Further, in December 2012, we found that airport operators who completed the applications generally stated that they faced difficulties in doing so and that additional guidance would have been helpful. For example, one operator stated that he needed cost information to help demonstrate that his airport’s participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport’s application. 
However, TSA officials at the time said that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a detailed cost analysis using historical cost data from SPP and non-SPP airports. The absence of cost and other information in an individual airport’s application, TSA officials noted, would not materially affect the TSA Administrator’s decision on an SPP application. Therefore, we reported in December 2012 that while TSA had approved all applications submitted since enactment of the FAA Modernization Act, it was hard to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve transparency of the SPP application process. Specifically, we reported that in the absence of such application guidance and information, it may be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports’ SPP applications. We concluded that clear guidance for applying to the SPP could improve the transparency of the application process and help ensure that the existing application process is implemented in a consistent and uniform manner. Thus, we recommended that TSA develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions, and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport’s suitability for the SPP.
TSA concurred with our recommendation and, in January 2014, we reported that TSA had taken actions to address it. Specifically, TSA updated its SPP website in December 2012 by providing (1) general guidance to assist airports with completing the SPP application and (2) a description of the criteria and process the agency will use to assess airports’ applications to participate in the SPP. While the guidance states that TSA has no specific expectations of the information an airport could provide that may be pertinent to its application, it provides some examples of information TSA has found useful and that airports could consider providing to TSA to help assess their suitability for the program. Further, the guidance, in combination with the description of the SPP application evaluation process, outlines how TSA plans to analyze and use cost information regarding screening cost-efficiency and effectiveness. The guidance also states that providing cost information is optional and that not providing such information will not affect the application decision. As we reported in January 2014, these actions address the intent of our recommendation. In our December 2012 report, we analyzed screener performance data for four measures and found that there were differences in performance between SPP and non-SPP airports, and those differences could not be exclusively attributed to the use of either federal or private screeners. The four measures we selected to compare screener performance at SPP and non-SPP airports were Threat Image Projection (TIP) detection rates; recertification pass rates; Aviation Security Assessment Program (ASAP) test results; and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1). 
For each of these four measures, we compared the performance of each of the 16 airports then participating in the SPP with the average performance for each airport’s category (X, I, II, III, or IV), as well as the national performance averages for all airports for fiscal years 2009 through 2011. As we reported in December 2012, on the basis of our analyses, we found that, generally, screeners at certain SPP airports performed slightly above the airport category and national averages for some measures, while others performed slightly below. For example, at SPP airports, screeners performed above their respective airport category averages for recertification pass rates in the majority of instances, while at the majority of SPP airports that took PACE evaluations in 2011, screeners performed below their airport category averages. For TIP detection rates, screeners at SPP airports performed above their respective airport category averages in about half of the instances. However, we also reported in December 2012 that the differences we observed in private and federal screener performance cannot be entirely attributed to the type of screeners at an airport, because, according to TSA officials and other subject matter experts, many factors, some of which cannot be controlled for, affect screener performance. These factors include, but are not limited to, checkpoint layout, airline schedules, seasonal changes in travel volume, and type of traveler. We also reported in December 2012 that TSA collects data on several other performance measures but, for various reasons, the data cannot be used to compare private and federal screener performance for the purposes of our review. For example, passenger wait time data could not be used because we found that TSA’s policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. 
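The above-or-below comparisons GAO describes, setting each SPP airport's score on a measure against its airport-category average and the national average, can be sketched with a few lines of code. The airports, categories, and rates below are invented for illustration; none are TSA figures.

```python
# Hypothetical sketch of comparing an airport's measure (e.g., a TIP
# detection rate) against category and national benchmarks. All values
# below are invented, not TSA data.

def compare(rate, category_avg, national_avg):
    """Return whether a rate falls above or below each benchmark."""
    return ("above" if rate > category_avg else "below",
            "above" if rate > national_avg else "below")

category_avgs = {"I": 0.82, "II": 0.79}  # invented category averages
national_avg = 0.80                      # invented national average

airports = [("Airport A", "I", 0.84), ("Airport B", "II", 0.77)]  # invented
for name, category, rate in airports:
    vs_cat, vs_nat = compare(rate, category_avgs[category], national_avg)
    print(f"{name}: {vs_cat} category average, {vs_nat} national average")
```

As the report notes, such comparisons alone cannot attribute differences to screener type, since checkpoint layout, schedules, and traveler mix also vary across airports.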
We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates, but did not analyze these data because TSA’s Office of Human Capital does not collect these data for SPP airports. We reported that while the contractors collect and report this information to the SPP PMO, TSA does not validate the accuracy of the self-reported data nor does it require contractors to use the same human capital measures as TSA, and accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, we found that TSA could not guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be an equal comparison. Moreover, in December 2012, we found that while TSA monitored screener performance at all airports, the agency did not monitor private screener performance separately from federal screener performance or conduct regular reviews comparing the performance of SPP and non-SPP airports. Beginning in April 2012, TSA introduced a new set of performance measures to assess screener performance at all airports (both SPP and non-SPP) in its Office of Security Operations Executive Scorecard (the Scorecard). Officials told us at the time of our December 2012 review that they provided the Scorecard to FSDs every 2 weeks to assist the FSDs with tracking performance against stated goals and with determining how performance of the airports under their jurisdiction compared with national averages. According to TSA, the 10 measures used in the Scorecard were selected based on input from FSDs and regional directors on the performance measures that most adequately reflected screener and airport performance. Performance measures in the Scorecard included the TIP detection rate and the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened, among others.
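The customer-contact measure is a simple rate normalization: contact counts are scaled by passenger volume so that large and small airports can be compared on the same footing. A minimal sketch, with invented counts rather than TSA figures:

```python
# Sketch of a per-100,000-passengers normalization like the Scorecard's
# customer-contact measure. The counts below are invented.

def contacts_per_100k(contacts, passengers_screened):
    """Customer contacts per 100,000 passengers screened."""
    return contacts / passengers_screened * 100_000

# e.g., 45 negative contacts at an airport screening 1.5 million passengers
print(round(contacts_per_100k(45, 1_500_000), 1))  # → 3.0
```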
We also reported in December 2012 that TSA had conducted or commissioned prior reports comparing the cost and performance of SPP and non-SPP airports. For example, in 2004 and 2007, TSA commissioned reports prepared by private consultants, while in 2008 the agency issued its own report comparing the performance of SPP and non-SPP airports. Generally, these reports found that SPP airports performed at a level equal to or better than non-SPP airports. However, TSA officials stated at the time that they did not plan to conduct similar analyses in the future, and instead, they were using across-the-board mechanisms covering both private and federal screeners, such as the Scorecard, to assess screener performance across all commercial airports. We found that, in addition to using the Scorecard, TSA conducted monthly contractor performance management reviews (PMR) at each SPP airport to assess the contractor’s performance against the standards set in each SPP contract. The PMRs included 10 performance measures, including some of the same measures included in the Scorecard, such as TIP detection rates and recertification pass rates, for which TSA establishes acceptable quality levels of performance. Failure to meet the acceptable quality levels of performance could result in corrective actions or termination of the contract. However, in December 2012, we found that the Scorecard and PMR did not provide a complete picture of screener performance at SPP airports because, while both mechanisms provided a snapshot of private screener performance at each SPP airport, this information was not summarized for the SPP as a whole or across years, which made it difficult to identify changes in performance.
Further, neither the Scorecard nor the PMR provided information on performance in prior years or controlled for variables that TSA officials explained to us were important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. We concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Therefore, we recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance, which would better position the agency to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA concurred with the recommendation, and has taken actions to address it. Specifically, in January 2013, TSA issued its first SPP Annual Report. The report highlights the accomplishments of the SPP during fiscal year 2012 and provides an overview and discussion of private versus federal screener cost and performance. The report also describes the criteria TSA used to select certain performance measures and reasons why other measures were not selected for its comparison of private and federal screener performance. The report compares the performance of SPP airports with the average performance of airports in their respective category, as well as the average performance for all airports, for three performance measures: TIP detection rates, recertification pass rates, and PACE evaluation results.
Further, in September 2013, the TSA Assistant Administrator for Security Operations signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the SPP PMO must annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. We believe that these actions address the intent of our recommendation and should better position TSA to determine whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. Further, these actions could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. TSA has faced challenges in accurately comparing the costs of screening services at SPP and non-SPP airports. In 2007, TSA estimated that SPP airports would cost about 17 percent more to operate than airports using federal screeners. In our January 2009 report we noted strengths in the methodology’s design, but also identified seven limitations in TSA’s methodology that could affect the accuracy and reliability of cost comparisons, and its usefulness in informing future management decisions. We recommended that if TSA planned to rely on its comparison of cost and performance of SPP and non-SPP airports for future decision making, the agency should update its analysis to address the limitations we identified. TSA generally concurred with our findings and recommendation. In March 2011, TSA provided us with an update on the status of its efforts to address the limitations we cited in our report, as well as a revised comparison of costs for screening operations at SPP and non-SPP airports. 
This revised cost comparison generally addressed three of the seven limitations and provided TSA with a more reasonable basis for comparing the screening cost at SPP and non-SPP airports. In the update, TSA estimated that SPP airports would cost 3 percent more to operate in 2011 than airports using federal screeners. In March 2011, we found that TSA had also taken actions that partially addressed the four remaining limitations related to cost, but needed to take additional actions or provide additional documentation. In July 2014, TSA officials stated they are continuing to make additional changes to the cost estimation methodology and we are continuing to monitor TSA’s progress in this area through ongoing work. Chairman Hudson, Ranking Member Richmond, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Charles Bausell, Kevin Heinz, Susan Hsu, Tyler Kent, Stanley Kostyla, and Thomas Lombardi. Key contributors for the previous work that this testimony is based on are listed in the products. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
TSA maintains a federal workforce to screen passengers and baggage at the majority of the nation's commercial airports, but it also oversees a workforce of private screeners at airports who participate in the SPP. The SPP allows commercial airports to apply to have screening performed by private screeners, who are to provide a level of screening services and protection that equals or exceeds that of federal screeners. This testimony addresses the extent to which TSA (1) provides guidance to airport operators for the SPP application process, (2) assesses and monitors the performance of private versus federal screeners, and (3) compares the costs of federal and private screeners. This statement is based on reports and a testimony GAO issued from January 2009 through January 2014. Since GAO's December 2012 report on the Screening Partnership Program (SPP), the Transportation Security Administration (TSA) has developed guidance for airport operators applying to the SPP. In December 2012, GAO found that TSA had not provided guidance to airport operators on its SPP application and approval process, which had been revised to reflect statutory requirements. Further, airport operators GAO interviewed at the time identified difficulties in completing the revised application, such as obtaining cost information requested in the application. GAO recommended that TSA develop application guidance and TSA concurred. In December 2012, TSA updated its SPP website with general application guidance and a description of TSA's assessment criteria and process. The new guidance addresses the intent of GAO's recommendation. TSA has also developed a mechanism to regularly monitor private versus federal screener performance. In December 2012, TSA officials stated that they planned to assess overall screener performance across all commercial airports instead of comparing the performance of SPP and non-SPP airports as they had done previously.
Also in December 2012, GAO reported differences between the performance at SPP and non-SPP airports based on screener performance data. In addition, GAO reported that TSA's across-the-board mechanisms did not summarize information for the SPP as a whole or across years, making it difficult to identify changes in private screener performance. GAO concluded that monitoring and comparing private and federal screener performance were consistent with the statutory provision authorizing TSA to contract with private screening companies. As a result, GAO recommended that TSA develop a mechanism to regularly do so. TSA concurred with the recommendation and, in January 2013, issued its SPP Annual Report, which provided an analysis of private versus federal screener performance. In September 2013, TSA provided internal guidance requiring that the report annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. These actions address the intent of GAO's recommendation. TSA has faced challenges in accurately comparing the costs of screening services at SPP and non-SPP airports. In 2007, TSA estimated that SPP airports cost about 17 percent more to operate than airports using federal screeners. In January 2009, GAO noted strengths in TSA's methodology, but also identified seven limitations that could affect the accuracy and reliability of cost comparisons. GAO recommended that TSA update its analysis to address the limitations. TSA generally concurred with the recommendation. In March 2011, TSA described efforts to address the limitations and a revised cost comparison estimating that SPP airports would cost 3 percent more to operate in 2011 than airports using federal screeners. In March 2011, GAO found that TSA had taken steps to address some of the limitations, but needed to take additional actions.
In July 2014, TSA officials stated that they are continuing to make additional changes to the cost estimation methodology and GAO is continuing to monitor TSA's progress in this area through ongoing work. GAO has made several recommendations since 2009 to improve SPP operations and oversight, which GAO has since closed as implemented based on TSA actions to address them.
To date, CFO Act financial audits have resulted in greater data reliability and improved financial operations. Under the expanded act, all 24 CFO Act agencies can begin to gain the benefits demonstrated by those agencies that have already successfully undergone full-scale financial audits. This is absolutely critical and will put the federal government on a par with the private sector and state and local governments, which have already made the necessary investment in financial management. There is widespread consensus that the preparation and audit of financial statements has been the primary catalyst to increase the reliability of financial data and improve financial operations. During the past 5 years, due to the CFO Act’s requirement, we have seen audit coverage substantially increase to almost half of the government’s annual gross budget authority. Beginning with fiscal year 1996, due to the expanded CFO Act, audit coverage will expand to cover the entire operations of the 24 CFO Act agencies, which currently account for virtually all of the government’s outlays. Also, agencies are progressing in receiving unqualified audit opinions. In four cases (the Social Security, General Services, and National Aeronautics and Space Administrations and the Nuclear Regulatory Commission), unqualified opinions were rendered on fiscal year 1994 financial statements covering agencies’ entire operations. These agencies, which covered about 23 percent of the government’s fiscal year 1994 outlays, have demonstrated that preparing auditable financial statements is possible and, with priority and emphasis, can be achieved by the remaining 20 CFO Act agencies as well. In addition, there has been significantly greater commitment by the administration and agencies to effectively implement the CFO Act’s expanded financial statement preparation and audit requirements.
For example, OMB made it clear from the outset that it would not grant any waivers, although it has the authority to waive the requirement for fiscal years 1996 and 1997, thus helping to ensure greater adherence to the statutory timetable. Also, OMB, Treasury, and GAO have been meeting with agency CFOs and IGs to build consensus, and we have generally seen a good commitment being given to preparing and auditing financial statements. For instance, some agencies, such as the Departments of Interior and Education, are on an accelerated schedule to having agencywide financial statements 1 year before the act requires. Several CFOs and IGs have tempered their optimism, however, noting that funding constraints could dampen this momentum and hamper plans for meeting the act’s fiscal year 1996 requirement. It is essential that this time frame be met. As we have discussed in prior testimonies before the Congress, audited financial statements have provided significantly more accurate and useful information on the government’s financial status and its operations. Further, CFO Act financial audits have provided a greater understanding of the extent and nature of the financial control and systems problems facing the government, and a better appreciation for the limited extent to which the Congress and program managers can rely on the information they receive. Effective implementation of the CFO Act’s expanded requirement for audited financial information is essential for more informed decision-making and better accountability in virtually every major aspect of the government’s operations, as the following examples illustrate. In fiscal year 1994, the federal government collected a reported total of over $1.3 trillion in revenue, primarily from individual and corporate income taxes and import duties, fines, and fees. Reliable financial data are necessary to ensure that the government assesses and collects more of the revenue that is due from these sources.
This, however, is not yet the case, as shown by our financial audits at the Internal Revenue Service (IRS) and the U.S. Customs Service. The process of preparing and auditing financial statements for the government’s primary revenue collection agency has surfaced significant problems affecting its operations and credibility. For example, it came to light during our first audit of IRS’s financial statements—those for fiscal year 1992—that IRS could not verify or reconcile its $1.3 trillion in reported revenues to its accounting records; substantiate amounts for various types of taxes reported, such as social security, income, and excise taxes, although the amounts of these taxes are to be separately maintained; reconcile its cash accounts with Treasury’s; substantiate its billions of dollars of gross and net accounts receivables; or adequately account for its annual operating funds. To its credit, IRS has made a commitment to institute changes. Through the strong support of the Commissioner, the agency has made important strides to address its far-reaching financial management problems. IRS successfully implemented a new administrative accounting system in fiscal year 1993 that can better account for its more than $7 billion in annual operating funds. It entered into an agreement with the Department of Agriculture’s National Finance Center and now has control over its $5 billion payroll operations, which was lacking at the time of our first audit. It has taken physical inventories of its equipment and is beginning to get full control over these assets. IRS has ongoing efforts, including the use of outside contractors, to resolve its cash reconciliation problems and to strengthen its internal controls over payments.
Finally, although necessary systems changes to bring revenue accounting up to reasonable expectations have not been completed, better estimates of collectible delinquent taxes are now being developed as part of the financial statement preparation process so that the Congress will have the information needed to better gauge potential collectibility and to ask questions as to why amounts are not collectible. For example, the audit for fiscal year 1992 disclosed that IRS had $65 billion in delinquent taxes outstanding, not the $110 billion IRS reported, and, of the $65 billion, only $19 billion was estimated to be collectible. This type of data would provide a more reliable basis than has been available in the past on the merits of adding collection personnel. The future holds even greater potential. First, IRS is beginning to address the systems issues that will enable it to reliably show by type of tax how much has been actually received and who pays the tax. For example, excise taxes, such as those petroleum companies and chemical manufacturers, among others, pay to fund environmental cleanup activities, are to be segregated by type and are used to achieve specific policy goals. But our financial audit showed that IRS’s accounting system does not have this capability. Consequently, whether it be the Superfund Trust Fund or the Highway Trust Fund, a fund may be receiving more or less than it is due. Social security taxes are somewhat different in concept but the problem is the same. Under law, the Social Security Administration (SSA) receives social security taxes based on wage information reported by employers to IRS even if the taxes are ultimately not paid. This results in amounts going to the Social Security Fund from other tax sources, and while IRS knows that there is a discrepancy, it cannot yet identify that amount so that decisionmakers will know the cost of this policy. As a result of the financial audit, IRS is now working to address these problems.
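The fiscal year 1992 audit figures above lend themselves to a quick arithmetic cross-check. A minimal sketch, using only the amounts stated in the text (the variable names are illustrative, not from the audit):

```python
# Cross-check of the fiscal year 1992 IRS delinquent-tax figures cited above.
# All amounts are in billions of dollars, as reported.
reported_delinquent = 110   # delinquent taxes IRS originally reported
audited_delinquent = 65     # delinquent taxes the audit substantiated
estimated_collectible = 19  # portion of the audited amount deemed collectible

# Overstatement surfaced by the audit: $45 billion
overstatement = reported_delinquent - audited_delinquent

# Share of the substantiated delinquent balance expected to be collected
collectible_share = estimated_collectible / audited_delinquent

print(f"Overstatement surfaced by the audit: ${overstatement} billion")
print(f"Collectible share of delinquent taxes: {collectible_share:.0%}")
```

Less than a third of the substantiated delinquent balance was expected to be collected, which is why the audited figures give the Congress a sounder basis for judging the merits of adding collection personnel than the unaudited $110 billion did.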
Future systems changes should also result in extending the application of accrual accounting to the tax revenue stream so that IRS and the Congress will have somewhat better information about the taxes IRS should be collecting. Further, because the CFO Act calls for the development of better cost and performance data, IRS will have an opportunity to better justify and manage tax compliance initiatives. For example, over the years, questions have been raised over the amount of revenue to be generated from adding revenue agents or initiating special compliance initiatives. Such questions can only be conclusively answered by improving the basic reliability of IRS’s underlying data. Financial audits of the Customs Service, the government’s second most important revenue collector, revealed problems similar to those at IRS. These problems impaired Customs’ ability to effectively ensure that carriers, importers, and their agents complied with laws intended to ensure fair trade practices and protect the American people from unsafe and illegal imported goods. Further, these audits found that Customs did not adequately ensure that all goods imported into the United States were properly identified and that the related duties, taxes, and fees on imports, reported to be over $21 billion for fiscal year 1993, were properly assessed and collected; have adequate controls to detect and prevent excessive or duplicate payments; have adequate accountability over tons of illegal drugs and millions of dollars of cash and property seized or used in its enforcement efforts; and have adequate controls over the use and reporting of its operating funds. The Commissioner of Customs has expressed a strong commitment to resolve these problems and recognizes that a significant and sustained effort by Customs’ management will be required. Acting on this commitment, Customs has developed and tested nationwide a new program to reliably measure the trade community’s compliance with trade laws.
This program is expected to achieve better overall compliance with trade laws and tighter controls to ensure that the government receives all of the import taxes, duties, and fees to which it is entitled. This information will also help Customs ensure that it is making the best use of its limited inspection and audit resources. Moreover, Customs has developed and applied methodologies for more accurately reporting its collectible accounts receivable. It also reorganized its debt collection unit, formalized its collection procedures, and aggressively pursued collection of old receivables. According to Customs, this effort resulted in collections of over $35 million. Customs also began conducting nationwide physical inventories of its seized assets to improve the safeguards over this property and has taken steps, such as implementing basic reconciliations of records, to ensure more adequate control over the use and reporting of its operating funds. The Department of Defense (DOD) must have accurate financial information and internal controls to manage the Department’s vast resources—over $1 trillion in assets, 3 million military and civilian personnel, and a budget of over $250 billion for fiscal year 1995. Effective financial management is critical to assuring that these resources are productively employed in meeting our nation’s defense objectives. Unfortunately, DOD does not have effective financial management operations and the seriousness of its financial management problems caused us to add it to our high-risk list. No single military service or major component has been able to withstand the scrutiny of a financial statement audit. This failure has serious implications. Good financial management runs deeper than the ability to develop accurate financial records. It is being able to (1) provide managers with visibility and control over inventories, (2) project material needs, and (3) effectively balance scarce resources with critical needs. 
The CFO Act audits have served as an important catalyst for identifying and focusing management attention on the full extent and scope of the financial problems facing the Department. Since 1990, we and the DOD auditors have made over 350 recommendations to help resolve the financial management weaknesses identified throughout the Department. These audits have consistently identified fundamental deficiencies in DOD’s financial operations. For example, these audits have served to highlight that: As of August 1995, DOD problem disbursements—those for which the Department cannot match a disbursement with a related obligation—were reported to be $28 billion, and DOD continues to make hundreds of millions of dollars in overpayments to its contractors. As a result, DOD cannot ensure that it does not spend more than it is authorized—a basic fund control responsibility. DOD does not have adequate records or controls over the multibillion dollar investment in government-furnished property and equipment. DOD has failed to properly report billions of dollars in potential future liabilities, such as environmental cleanup costs. Further, beginning with fiscal year 1996, the Navy general fund operations will be subject to audit. We reviewed the Navy’s fiscal year 1994 financial reports as a measure of the Navy’s current ability to prepare reliable financial statements. In our pending report, we conclude that, to an even greater extent than the other military services, the Navy is plagued by troublesome financial management deficiencies involving tens of billions of dollars. DOD has recognized the seriousness of its financial management problems and the need to take action. Secretary Perry and Comptroller Hamre have been candid in their assessments of the status of current processes and practices. Further, the Department’s financial reform blueprint—presented in February 1995—offers a good perspective of the corrective actions which must be taken.
We believe this plan represents an important first step in committing DOD to real action. As we testified earlier this year, however, very serious management challenges face the Department as it moves to make the blueprint a reality. We recommended that DOD determine what skills are required to ensure that the plan is developed and implemented and to establish an independent, outside board of experts to provide counsel, oversight, and perspective to reform efforts. We are also concerned about the pace of needed improvements at DOD. According to a recent DOD IG report, DOD’s development of new accounting systems will not be completed until the end of fiscal year 1998 and, consequently, DOD’s IG will not be able to render audit opinions on any of the military services’ general fund operations until March 2000 at the earliest. As we testified last month, given the serious and pervasive nature of DOD’s financial management problems, and the need for more immediate progress, the Department needs to consider additional steps to (1) establish a skilled financial management workforce, (2) ensure that financial management systems are capable of producing accurate data, and (3) build an effective financial management organization structure with clear accountability. We will continue to review more detailed implementation plans intended to carry out DOD’s blueprint—including assessments of DOD’s strategy and timing of proposed actions—and to work with DOD on implementing recommended improvements. The federal government is the nation’s largest single source of credit. It lends or guarantees hundreds of billions of dollars of loans for a wide variety of programs, such as housing, farming, education, and small business. At September 30, 1994, the government reported (1) $241 billion in nontax receivables, of which $49 billion, or over 20 percent, was reported to be delinquent and (2) $694 billion in guarantees of outstanding loans for which it was contingently liable. 
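The "over 20 percent" delinquency figure follows directly from the reported balances; a quick check using the September 30, 1994, amounts cited above (the variable names are illustrative):

```python
# Delinquency rate implied by the reported September 30, 1994, balances
# (amounts in billions of dollars).
nontax_receivables = 241
delinquent_receivables = 49

delinquency_rate = delinquent_receivables / nontax_receivables
print(f"Delinquent share of nontax receivables: {delinquency_rate:.1%}")
```

The computed rate of roughly 20.3 percent is consistent with the "over 20 percent" characterization in the text.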
There are four principal credit agencies: the Department of Agriculture, with 56 percent of the loans; the Department of Housing and Urban Development (HUD), with 11 percent of the loans and 55 percent of the guarantees; the Department of Education, with 7 percent of the loans and 11 percent of the guarantees; and the Department of Veterans Affairs, with 23 percent of the guarantees. We have long been concerned about the quality and reliability of financial information on credit programs. Our audits, as well as those by the IGs, have consistently disclosed serious weaknesses in agency systems that account for and control receivables, and three of the lending programs—(1) farm loans, (2) student financial aid, and (3) housing guarantees—are on our high-risk list. Agency managers need accurate and reliable information on a day-to-day basis to effectively manage multibillion dollar loan and loan guarantee portfolios and to determine the value and collectibility of debts owed the government. For example, audits have disclosed weaknesses in agency approaches to estimating losses on these loans and, in some cases, have resulted in significant adjustments to the recorded loss reserves. In response to problems identified in the Federal Housing Administration’s (FHA) fiscal year 1991 financial statement audit and to prepare for the fiscal year 1992 audit, FHA’s management initiated a special study to better estimate loan loss reserves. As a result, in fiscal year 1992, FHA’s loan loss reserves for the multifamily General Insurance (GI) and the Special Risk Insurance (SRI) funds increased by $6.4 billion. The GI reserve increased from $5.8 billion to $10.6 billion and the SRI reserve increased from $156 million to almost $1.9 billion. Financial audits of the Federal Family Education Loan Program identified that Education’s estimates of the cost to the government of loan guarantees, estimated at $15.2 billion as of September 30, 1994, were derived using unreliable data. 
Education is now working more closely with the guaranty agencies to understand and resolve some of the student loan data errors. As a result of these and other on-going financial audits, there now exists a clearer picture of the government’s performance and loss estimates for lending programs. The loss estimates will become more accurate as agencies gain experience in implementing the Credit Reform Act of 1990 and the related accounting standard for direct loans and loan guarantees developed by the Federal Accounting Standards Advisory Board (FASAB). These efforts and the ongoing audit process should result in appropriate systems and methodologies being implemented to provide critical program cost and budget information. The expansion of the CFO Act’s financial statement preparation and audit requirement will bring a significant amount of the federal budget under examination for the first time. For example, the first full audit of almost $300 billion of Medicare and Medicaid expenditures, or about 19 percent of the federal government’s expenditures, will be performed. This will be especially important, given the role of Medicare and Medicaid spending in driving the growth of federal expenditures in the foreseeable future. Moreover, some health care experts have estimated that as much as 10 percent of national health care spending is lost to waste, fraud, and abuse. Also, we and others have reported many prior problems with these programs, and limited financial audits to date have shown a lack of detailed supporting records. For example, the Health Care Financing Administration’s fiscal year 1994 balance sheet audit disclosed inadequate or no documentation supporting over $100 million of Medicare receivables under contractor supervision, making collectibility questionable. 
A full financial audit of these expenditures will provide a much better understanding of the reliability of reported Medicare and Medicaid payments, control weaknesses that permit waste, fraud, and abuse to occur and needed corrective actions, and the impact of noted problems on program operations. Another significant area to be audited is the federal government’s substantial environmental cleanup costs relating to federal facilities that were contaminated with nuclear materials or other hazardous substances. OMB estimated in October 1995 that the federal government’s known environmental cleanup costs could range from $200 billion to $400 billion in the years ahead. The agencies included in this estimate are the Departments of Energy, Defense, Interior, and Agriculture and NASA. The full magnitude of the government’s environmental cleanup liability is unknown. For example, $200 billion to $350 billion of the above amount was estimated for Energy alone; however, Energy’s estimate excludes certain costs, such as costs related to those items for which technological solutions do not currently exist, such as most groundwater contamination. The agencywide audits conducted under the expanded CFO Act requirements will provide an indication of the reasonableness of current agency estimates. In addition, financial statement disclosures will provide information on the nature, location, and magnitude of the federal government’s overall exposure for environmental cleanup. In addition to these major investments, there are other key federal investments that will come under scrutiny as well. We are concerned, however, that scrutiny for some of these investments may not occur soon enough because a few agencies may slip in meeting the CFO Act’s time schedule. 
For example: The Federal Emergency Management Agency (FEMA), which made $7 billion in relief payments in fiscal years 1992 through 1994, will not be ready to have its Disaster Relief Fund’s financial records and reports audited within the next year. The fund’s accounting records contain inaccurate data that have never been reconciled to supporting records, including unliquidated obligations of over $4 billion for disasters that date back to FEMA’s inception in 1979. To prepare for the audit, FEMA has, with contractor help, begun the necessary reconciliation. FEMA has stated that it plans to have agencywide audited financial statements beginning with fiscal year 1998. The Department of Transportation (DOT) had over $47 billion in fiscal year 1994 gross budget authority and is accountable for important aspects of ensuring the development and safety of the nation’s highways, railroads, and airways, including those administered by the Federal Highway Administration, the Federal Aviation Administration, and the Coast Guard. DOT has not yet prepared agencywide financial statements and does not plan to do so for fiscal year 1995. Based on DOT’s progress to date, without additional impetus, it is uncertain whether the Department will be ready to prepare reliable consolidated agencywide financial statements within the statutory time frame. Under the requirements of the CFO Act, the Department of Justice (DOJ), which does not have many trust or revolving funds or commercial functions and was not part of the pilot program, was not required to audit many of its significant operations. Of its $13.5 billion in gross budget authority, only 12 percent, or $1.6 billion, was subjected to audit. DOJ’s major bureaus, such as the Federal Bureau of Investigation, Drug Enforcement Administration, Immigration and Naturalization Service, U.S. Attorneys Office, and Marshals Service have not been audited, nor have financial statements been prepared for these entities.
DOJ is the only department that has requested a waiver from the preparation and audit of departmentwide financial statements for fiscal year 1996 under the expanded CFO Act requirements. The Department has cited as the basis for its request the lack of experienced staff to prepare financial statements and the lack of funds to contract for the audits. We believe DOJ needs to commit to the audited financial statement requirements and treat them as a priority, given the technical and cultural challenges that must be overcome. Financial audits are also continuing to find long-standing material internal control weaknesses at the agencies under audit and to propose corrective actions to resolve them. These audits also continue to provide a much needed discipline in pinpointing operational inefficiencies and weaknesses, highlighting gaps in effectively safeguarding the government’s assets, and preventing possible illegal acts. Financial audits, for instance, identified information security weaknesses that increased the risk that sensitive and critical computerized data and computer programs will be inappropriately modified, disclosed, or destroyed. For example: IRS continued to lack sufficient safeguards to prevent or detect unauthorized browsing of confidential taxpayer records; student loan data maintained by Education could have been modified for fraudulent purposes because users had the ability to override controls designed to prevent such actions; FHA had continuing weaknesses in systems, including those that process sensitive cash receipt and disbursement transactions; at the Customs Service, thousands of users had inappropriate access to critically sensitive programs and data files; and the Navy had significant weaknesses involving access to financial data and the adequacy of computer center plans for recovery if service is interrupted. Further, financial statement audits have continued to identify potential and actual dollar savings.
These savings include the recovery of millions of dollars in overpayments to DOD contractors, the collection of receivables, the recoupment of payments incorrectly made to government intermediaries and employees, and reductions in excessive operating costs. Further, financial audits are disclosing areas where the government may be paying more than it should or may not be collecting all that it should. For example: Education did not have systems or procedures in place to ensure that individual billing reports submitted by guaranty agencies and lenders were reasonable. For fiscal year 1994, payments on these billings were estimated at $2.5 billion. The Coast Guard could not provide detailed supporting records for almost $100 million of accounts receivable reported for the Oil Spill Liability Trust Fund and the associated $65 million estimate for uncollectible accounts. Financial audits have also shown that agencies often do not follow rudimentary bookkeeping practices, such as reconciling their accounting records with Treasury accounts or their own subsidiary ledgers. These audits have identified hundreds of billions of dollars of accounting errors—mistakes and omissions that can render information provided to managers and the Congress virtually useless. This situation could be much improved if more rigor were applied in following existing policies and procedures. Beginning with those for fiscal year 1997, Treasury will prepare financial statements for the executive branch as a whole, and we will audit these statements. For the first time, the American public will have an annual report card on the results of current operations and the financial condition of its national government. I am most pleased that this requirement has finally become a reality.
My hope is that the requirement for audited financial statements would be extended to the legislative and judicial branches so that these could be included in audited governmentwide consolidated financial reports to the American taxpayers. I am also pleased that the Federal Reserve has contracted for financial audits over the next 5 years. My hope is that other independent agencies of the government would do likewise. As the consolidated executive branch statements evolve and when the quality of the underlying data can withstand the scrutiny of an independent audit, they will not only be useful for decisionmakers but will help engender public confidence that the federal government can be an effective financial steward, fully accountable for the use of tax dollars. These statements should provide a clear picture of the financial demands and commitments of the federal government, the available resources, the execution of the budget, and the results, both financial and performance, of current operations. We are working closely with OMB, Treasury, the agency CFOs, and the IGs. We have formed a series of task forces to address accounting and auditing issues and are actively supporting the work of FASAB. This is a tremendous undertaking and will require all parties to work together. 
For our part, we are going to perform the IRS financial statement audit for the fourth year and conduct the first-ever financial statement audit for the Bureau of Public Debt, which accounts for more than $3.4 trillion of federal debt held by the public and the related annual interest payments; undertake selective work at major agencies involving, for example, SSA’s 75-year actuarial projections, DOD’s mission assets (valued at over $1 trillion), the almost $200 billion Medicare program, and the almost $100 billion Medicaid program, coordinating our efforts at these agencies with the IGs; and work cooperatively with the IGs at the 24 CFO Act agencies as they audit other major key accounts. This will be a major challenge. We are very much depending on the 24 CFO Act agency IGs to do their individual audits, and are concerned about the extent to which budget constraints may affect their ability to perform those audits properly and on time. I am also concerned that GAO’s downsizing has left us short of the accounting and financial systems expertise needed in 1997 to conduct the consolidated executive branch financial statement audit. Even though I have reassigned personnel within GAO to the maximum extent possible, we are still short about 100 to 150 people who possess the technical skills we need to do the job. I expect this problem to be even further exacerbated as we experience additional attrition in these areas throughout 1996. We plan to consult with the Congress about this problem in the context of our fiscal year 1997 budget submission. The leadership envisioned by the CFO Act is beginning to take root. In general, we have found that OMB’s Deputy Director for Management and Controller and the agency CFOs and Deputy CFOs meet the qualifications outlined by the CFO Act.
Also, the CFOs are active in their agencies and as a group through the CFO Council, which the act created, to provide the leadership foundation necessary to effectively carry out their responsibilities. CFO Act agencies, however, need to ensure that CFOs possess all the necessary authorities within their agencies to achieve change. For instance, because of the interdependency of the budget and accounting functions, many agencies have included both budget formulation and execution functions under the CFO’s authority. However, at a few agencies, such as the Department of Agriculture, HUD, and the Agency for International Development, CFOs do not have a full range of budget responsibilities. HUD’s CFO, for instance, maintains records of, and provides HUD’s budget office with, information on obligations and unexpended balances but is not involved in formulating the budget or allocating and reallocating funds throughout the year. At Education and Labor, CFOs have responsibility for budget execution but not for budget formulation. We believe that each CFO Act agency should recognize that both these functions can best be integrated with the agency’s other financial activities by delegating responsibility for them to the CFO. Also, at many CFO Act agencies, financial management responsibility rests with the CFO but is carried out by the financial leaders at the agencies’ components, which can create problems. For instance, we recently reported that the Department of Agriculture’s CFO has neither the authority within the Department nor the mechanism to enforce compliance with its financial standards. To overcome this kind of situation, we believe it is important for CFOs to have a strong role in and authority over component financial management matters. Additionally, some CFOs have responsibility for operational functions, such as procurement and grants management, in addition to those directly related to agency financial management. 
While functions such as these can provide opportunities for much needed integration of different functional areas, they also have the potential to distract the CFOs from concentrating on financial management issues throughout the agencies. Another serious problem the CFOs face in building an effective supporting structure is attracting and retaining well qualified financial management personnel and working to upgrade staff skills in a constrained budget environment. Financial audits have shown with greater clarity the extent and nature of the government’s financial management personnel shortages and the importance of overcoming them. These audits have consistently disclosed agencies having extraordinary financial management problems in even the fundamental areas of making reconciliations, documenting adjustments, ensuring that inventories are taken, and making supervisory reviews of accounts and transactions. Weaknesses such as these lead us to believe that fundamental skill levels and training issues must be addressed quickly. Moreover, implementing the CFO Act’s objective of upgrading financial operations, such as developing performance measurement systems and integrating budget and accounting data, will require significantly enhanced staff skills. Focusing on these areas is difficult when agencies’ basic financial and control weaknesses remain unchecked. Top managers are, however, beginning to get a sense of the extraordinary effort that will be needed to upgrade financial management organizations and to fix known problems. In this regard, OMB’s July 1995 Federal Financial Management Status Report and Five-Year Plan addresses the need to develop a quality financial management workforce by implementing methods to assist agencies in recruiting and retaining qualified financial management personnel. CFOs, though, have a significant challenge in building effective organizations to meet the CFO Act’s challenges. 
To help in this area, in June 1992, the Association of Government Accountants made 30 recommendations covering all facets of the financial personnel challenge, from recruiting talented staff to reducing turnover. The CFO Council’s Human Resources Committee is working to implement these strategies through such activities as coordinating efforts to provide low-cost, effective financial management training and developing a plan for establishing core competencies and standards for all CFO-related positions. Investments must be made in training to ensure that financial management personnel increase their professional skills to keep pace with emerging technology and developments in financial management. However, financial management training is often a neglected aspect of ensuring high-quality financial operations. In our discussions with the 24 CFO Act agencies, most said they had not established formal training programs to enhance the skills and knowledge of financial management staff. Some agencies, though, have acted. The Department of Energy, for example, has established a training program for financial managers that all of its CFO offices are required to implement and that is based on employees’ individual development plans. Also, the Department of Education requires its financial personnel to complete 40 hours of continuing professional education annually. We have called for financial management personnel to be required to participate in a minimum amount of continuing professional education. Government auditors are required to attend 80 hours of continuing professional education every 2 years, and this requirement has helped enhance audit quality and professionalism. We believe, though, that upgrading and training financial management staff requires much greater short-term attention to identify more specifically the extent of the skills gap and how it can be most effectively narrowed or closed.
We plan to study this area in more depth in the coming months and will report the results to the Committee. In this regard, the Committee can be of assistance by challenging the CFOs to clearly identify financial management skill shortages in terms of personnel needs to effectively achieve the CFO Act’s financial management objectives. Further, the Committee can encourage agencies to get the resources and financial management talent needed to make these improvements. Seriously inadequate financial management systems are currently the greatest barrier to timely and meaningful financial reporting. Agency systems are old and do not meet users’ needs. In March 1995, OMB reported that 39 percent of agency systems were originally implemented over 10 years ago; 53 percent need to be replaced or upgraded within the next 5 years. The CFO Council has designated financial management systems as its number one priority. The need for this emphasis is underscored by the results of self-assessments by the 24 CFO Act agencies, which showed that most agency systems are not capable of readily producing annual financial statements and are not in compliance with current system standards. Equally important, as a result, managers do not have reliable, timely financial data throughout the year to help manage effectively. The poor condition of agency financial systems is a symptom of a much broader issue—the federal government’s overall inability to effectively manage investments in information technology (IT). Many projects have been poorly managed, cost much more than anticipated, and have not provided intended benefits. There is a growing recognition that fundamental information technology management problems need to be addressed, and a number of initiatives are underway to do this.
For example, our May 1994 executive guide on the best information management practices of leading organizations has been enthusiastically received, and several agencies are actively attempting to implement its tenets. We testified before this Committee on the key practices outlined in this guide. Also, we have developed several tools to assist agencies in taking a strategic view of their information resource management practices and maximizing their IT investments. Our Strategic Information Management (SIM) Self-Assessment Toolkit, for example, has been used by several agencies, including IRS and HUD, and has already resulted in several million dollars in savings. In August 1995, we issued an exposure draft of our Business Process Reengineering Assessment Guide, which is currently being pilot tested at several agencies. Additionally, we have worked with OMB in finalizing Evaluating Information Technology Investments: A Practical Guide, which will provide agency managers a systematic and objective means of assessing the risk and maximizing the return associated with planned IT investments. Further, the Congress is taking steps to improve federal IT management. Earlier this year, the Congress amended the Paperwork Reduction Act, which the President signed into law on May 22, 1995. The amendments should improve the management of IT resources and institute stronger controls over investments. Other legislative proposals to strengthen leadership and accountability are being considered, including establishing Chief Information Officers and changing system planning and acquisition practices. There are also improvement efforts underway specifically aimed at financial systems. For example, in January 1995, the Joint Financial Management Improvement Program (JFMIP) published a model for establishing and maintaining integrated financial management systems. 
This document, entitled Framework for Federal Financial Management Systems, is an important step in providing needed guidance. Additionally, OMB’s July 1995 Federal Financial Management Status Report and Five-Year Plan sets out broad objectives, tasks, and milestones to help improve systems. The plan, for example, addresses making better use of off-the-shelf technology, cross servicing, and outsourcing. Overall, OMB’s objectives have provided the right emphasis and priority for financial systems improvements. OMB and the CFO Act agencies must now focus on specific implementing policies and strategies. To help these efforts, we are preparing a methodology for reviewing financial management systems. This methodology also could provide a starting point to help agencies develop systems requirements for building integrated information systems to support their missions, operations, and governmentwide reporting requirements. We plan to work with OMB and the CFO Council to move in this direction and will report the results to the Committee next spring. Also, since the benefits of long-term efforts to improve agency systems often require years to realize, agencies need to make their existing systems work better in the interim. An important aspect of this is to ensure the validity of existing data and implement the routine controls needed to keep these data reliable, such as reconciliations to identify and resolve discrepancies. Such efforts will improve data reliability and help ensure that information transferred to new systems is accurate. One of the CFO Act’s primary goals is to enhance the reporting of reliable financial and performance data that are useful and understandable to program managers and congressional decisionmakers. 
Prior to its enactment, despite good intentions and past efforts to improve financial management systems, the government was not using timely, reliable, and comprehensive financial information when making decisions having a tremendous impact on the American public. The first important step was taken with the CFO Act requirement for the preparation and audit of financial reports to achieve basic data reliability. Now, at least we will know when data are reliable and when they are not. The next steps, which build on the foundation laid by the CFO Act, will further enhance the usefulness of accountability reporting to decisionmakers by integrating performance measures into the reports and developing reports more specifically tailored to the government’s needs. They include the efforts of the Federal Accounting Standards Advisory Board (FASAB) to develop accounting standards and OMB’s efforts to implement the Government Performance and Results Act (GPRA) and to develop streamlined Accountability Reports. As you may know, FASAB was established in October 1990 by the Secretary of the Treasury, the Director of OMB, and myself to consider and recommend accounting principles for the federal government. The nine-member Board is comprised of representatives from the three principals, the Congressional Budget Office, the Department of Defense, one civilian agency (presently from Energy), and three representatives from the private sector, including the Chairman, former Comptroller General Elmer B. Staats. FASAB publishes recommended accounting standards after considering the financial and budgetary information needs of the Congress, executive agencies, other users of federal financial information and comments from the public. OMB, Treasury and GAO then decide whether to adopt the recommended standards; if they do, the standard is published by GAO and OMB and becomes effective. 
Early next year, FASAB will complete the federal government’s first set of comprehensive accounting standards developed under this consensus approach, which has worked well. While the development of accounting standards as envisioned by FASAB and its three principals is very important to strengthening accountability, the benefits will come from their full implementation. It is our understanding that Senator Brown plans to introduce legislation that would establish in law the FASAB process, which at this time, is operating under a memorandum of understanding. Among the purposes cited in the legislation is to provide for uniform adoption and application of accounting standards across government and the establishment of systems that meet the requirements of the CFO Act. The legislation being considered calls for each federal agency to give priority to funding and provide sufficient resources to implement the act. Further, the proposed legislation would require an agency’s CFO Act auditor to report whether the agency’s financial management system complies substantially with the FASAB accounting standards and other financial management system requirements. We understand that Senator Brown’s proposal will also include mechanisms to highlight an agency’s compliance problem to the Congress and to work with OMB on remedial actions to bring the agency’s financial management systems into compliance. We support the goals of Senator Brown’s proposal, which make permanent the work of FASAB and add additional emphasis on implementing the accounting standards. We will be glad to work with the Committee as it considers this proposal. Key to the FASAB approach was extensive consultation with users of financial statements early in their deliberations to ensure that the standards will result in statements that are relevant to both the budget allocation process as well as agencies’ accountability for resources. 
Users were interested in getting answers to questions on such topics as:

Budgetary integrity: What legal authority was provided to finance government activities and was it used correctly?

Operating performance: How much do programs cost and how were they financed? What was achieved? What are the government’s assets and are they well managed? What are its liabilities and how will they be paid for?

Stewardship: Has the government’s overall financial capacity to satisfy current and future needs and costs improved or deteriorated? What are its future commitments and are they being provided for? How will the government’s programs affect the future growth potential of the economy?

Systems and control: Does the government have sufficient controls over its programs so that it can detect and correct problems?

Standards and reports addressing these objectives are being phased in over time. Since the enactment of the CFO Act, OMB’s guidance on the form and content of financial statements has stressed the use of narrative “Overview” sections preceding the basic financial statements as the best way for agencies to relate mission goals and program performance measures to financial resources. Each financial statement includes an Overview describing the agency, its mission, activities, accomplishments, and overall financial results and condition. The Overview also should discuss what, if anything, needs to be done to improve either program or financial performance, including an identification of programs or activities that may need significant future funding. Agencies are beginning to produce reports that do this. For example, SSA’s fiscal year 1994 financial statement Overview presented a number of performance measures dealing with the adequacy of the trust fund, service satisfaction, promptness in issuing earnings statements and processing claims, and the adequacy of employee training. Linking the costs of achieving these performance levels is the next challenge.
In this regard, FASAB’s cost accounting standards—the first set of standards to account for costs of federal government programs—will require agencies to develop measures of the full costs of carrying out a mission or producing products or services. Thus, decisionmakers would have information on the costs of all resources used and the cost of support services provided by others to support activities or programs—and could compare these costs with program performance. GPRA sets forth the major steps federal agencies need to take towards a results-oriented management approach. They are to (1) develop a strategic plan, (2) establish performance measures to monitor progress in meeting strategic goals, and (3) link performance information to resource requirements through the budget. GPRA requires up to five performance budgeting pilots for fiscal years 1998 and 1999. OMB will report the results of these pilots in 2001 and recommend whether performance budgets should be legislatively required. Cultural changes in federal agencies are beginning as agency pilots develop strategic plans and performance measures. OMB also has prompted progress by giving special emphasis in the fiscal year 1996 Circular A-11, Preparation and Submission of Budget Estimates, to increasing the use of information on program performance in budget justifications. Moreover, OMB Director Rivlin instructed her agency to use performance information in making budget recommendations. In preparation for the fiscal year 1997 budget cycle, OMB held performance reviews in May with agencies on performance measures and recently issued guidance on preparing and submitting strategic plans. Further progress in implementing GPRA will occur as performance measures become more widespread and agencies begin to use audited financial information in the budget process to validate and assess agency performance.
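The link the cost accounting standards draw between full cost and performance reduces to a simple ratio: the total cost of a program, including support services assigned from elsewhere in the agency, divided by the outputs produced. A minimal sketch of that idea follows; the function name and all figures are hypothetical, not drawn from any agency's statements.

```python
def full_cost_per_output(direct_costs, assigned_support_costs, outputs):
    """Full unit cost of a program output: direct costs plus support
    costs assigned by other units, divided by the outputs produced.
    All names and figures here are illustrative assumptions."""
    total_cost = sum(direct_costs) + sum(assigned_support_costs)
    return total_cost / outputs

# Hypothetical program: $60 million direct costs, $15 million in
# assigned support costs, 500,000 claims processed in the year.
unit_cost = full_cost_per_output([60_000_000], [15_000_000], 500_000)
```

A decisionmaker could then compare this unit cost across years, or against a performance target, which is the comparison the standards are meant to enable.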
OMB is also making efforts to design new financial reports based on FASAB’s recommended standards that contain performance measures and budget data to provide a much needed, additional perspective on the government’s actual performance and its long-term financial prospects. While there are a myriad of legislatively mandated reporting requirements which could be presented in separate reports, I think that decisionmakers would find that a single report relating performance measures, costs, and the budget would be most useful. This reporting approach is consistent with the CFO Council’s proposal for an Accountability Report, which OMB is pursuing. The Government Management Reform Act of 1994 authorized OMB, upon proper notification to the Congress, to consolidate and simplify statutory financial management reports. The CFO Council has proposed two annual reports, a Planning and Budgeting Report and an Accountability Report. The two consolidated reports would be used to present a comprehensive picture of an agency’s future plans and performance by addressing (1) how well the agency performed (accountability) and (2) the road map for its future actions (planning and budgeting). The consolidation of current reports into the Accountability Report would eliminate the separate requirements under various separate laws—such as GPRA, the Federal Managers’ Financial Integrity Act, the CFO Act, and the Prompt Payment Act. The Planning and Budget Report is intended to provide a comprehensive picture of an agency’s program and resource utilization plans within its strategic vision. It is supposed to link resources requested with planned actions. OMB is undertaking to have six agencies produce, on a pilot basis, Accountability Reports providing a comprehensive picture of each agency’s performance pursuant to its stated goals and objectives. 
We agree with the overall streamlined reporting concept and believe that, to be most useful, the Accountability Report must include an agency’s financial statements and the related audit reports. The ultimate usefulness of the Accountability Report will hinge on its specific content and the reliability of information presented. In this regard, OMB and the CFO Council will be more fully defining the information to be included in the Accountability Reports during the pilot phase. We will work with OMB and agencies throughout the pilot program. The pilot concept has worked well in the past under the CFO Act and GPRA. Of course, the ultimate goal of more reliable and relevant financial data is to promote more informed decision-making. This requires that financial data produced be understood and used by program managers and budget decisionmakers. The changes underway to financial reporting have been undertaken with a goal of making financial data more accessible to budget decisionmakers. The budget community’s involvement in the FASAB standard-setting process and OMB’s accountability proposal have contributed to this. The future challenge is to further integrate financial reports with the budget to enhance the quality and richness of the data considered in budget deliberations. As I will discuss below, improving the linkages between accounting and budgeting also calls for considering certain changes in budgeting, such as realigned account structures and the selective use of accrual concepts. Perhaps the chief benefit of improving this linkage will be the increased reliability of the data on which we base our management and budgetary decisions. From an agency perspective, having audited information on the value of assets and liabilities, as well as the full costs of program outputs, will permit more informed judgments in strategic planning and program priority setting.
Coupled with internal control assessments, such information will also enable agencies to better target areas requiring greater management attention or reform. For example, as I discussed earlier, the IRS financial audit revealed that the accounts receivable inventory was largely uncollectible—important information that permits IRS to better target its collection resources and permits more informed appropriations decisions on the level of resources necessary to collect these funds. From a budgetary decision-making perspective, the new financial reports will improve the reliability of the budget numbers undergirding decisions. Budgeting is a forward-looking enterprise, but it can clearly benefit from better information on actual expenditures and revenue collection. Numbers from the budget will be included in basic financial statements and thus will be audited for the first time. Having these numbers audited was one of the foremost desires of budget decisionmakers consulted in FASAB’s user needs study and stems from their suspicion—well warranted I might add—that the unaudited numbers may not always be correct. For example, decisionmakers rely on data based on IRS systems on the amounts of revenue collected for each type of tax. However, as highlighted earlier, our audit revealed that the IRS’s reported revenue of $1.3 trillion for fiscal year 1994 could not be verified or reconciled to accounting records maintained for individual taxpayers in the aggregate and amounts reported for various types of taxes collected could not be substantiated. This means that the amount credited to the Social Security Trust Fund is different than the amount of social security taxes actually collected. Financial audit reports have also revealed important information on the actual costs of credit programs which can inform future budgetary decisions. 
Specifically, the fiscal year 1994 financial audit reports of the Farmers Home Administration, the Federal Housing Administration, the Federal Family Education Loan Program, and the Small Business Administration revealed that agencies’ estimates of the subsidy costs of their credit programs reflected in the budget are not accurate. Based on these audits, budget decisionmakers know that they have reason to question the amount of future budget requests for these programs. The new financial reports will also offer new perspectives and data on the full costs of program outputs and agency operations that is currently not reported in our cash-based budget. Information on full costs generated pursuant to the new FASAB standards would provide decisionmakers a more complete picture of actual past program costs and performance when they are considering the appropriate level of future funding. For example, the costs of providing Medicare are spread among at least three budget accounts —the Federal Hospital Insurance Trust Fund, the Federal Supplementary Medical Insurance Trust Fund, and the Program Management account. Financial reports would pull all relevant costs together. The different account structures that are used for budget and financial reporting are a continuing obstacle to using these reports together and may prevent decisionmakers from fully benefiting from the information in financial statements. Unlike financial reporting, which is striving to apply the full cost concept when reporting costs, the budget account structure is not based on a single unifying theme or concept. As we reported recently, the current budget account structure evolved over time in response to specific needs. The budget contains over 1,300 accounts, with nearly 80 percent of the government’s resources clustered in less than 5 percent of the accounts. Some accounts are organized by the type of spending (such as personnel compensation or equipment) while others are organized by programs. 
Accounts also vary in their coverage of cost, with some including both program and operating spending while others separate salaries and expenses from program subsidies. Or, a given account may include multiple programs and activities. When budget account structures are not aligned with the structures used in financial reporting, additional analyses or crosswalks would be needed so that the financial data could be considered in making budget decisions. If the Congress and the executive branch reexamine the budget account structure, the question of trying to achieve a better congruence between budget accounts and the accounting system structure should be considered. In addition to providing a new, full cost perspective for programs and activities, financial reporting has prompted improved ways of thinking about costs in the budget. For the most part, the budget uses the cash basis, which recognizes transactions when cash is paid or received. Financial reporting uses the accrual basis, which recognizes transactions when commitments are made, regardless of when the cash flows. Cash-based budgeting is generally the best measure to reflect the short-term economic impact of fiscal policy as well as the current borrowing needs of the federal government. And for many transactions, such as salaries, costs recorded on a cash basis do not differ appreciably from accrual. However, for a select number of programs, cash-based budgeting does not adequately reflect the future costs of the government’s commitments or provide appropriate signals on emerging problems. For these programs, accrual-based reporting may improve budgetary decision-making. The accrual approach records the full cost to the government of a decision—whether to be paid now or in the future. As a result, it prompts decisionmakers to recognize the cost consequences of commitments made today. The credit arena is a good example of how financial reporting has informed budget decision-making. 
Beginning in fiscal year 1992, accrual budgeting principles were applied to loans and loan guarantee programs with the implementation of credit reform. Cash treatment of these programs sent misleading signals by recording costs only when cash flowed in and out of the federal Treasury. Under this approach, loan guarantees, for example, were recorded as having no costs in the year in which program commitments were authorized, regardless of future costs flowing from this commitment. By contrast, under credit reform, the budget reflects the present value of subsidy costs to be incurred over time up front at the time when commitments are made. It may be appropriate to extend the use of accrual budgeting to other programs, such as federal insurance programs—an issue we are currently studying at the request of the Chairman, House Budget Committee. For example, the cash position of the nation’s deposit insurance system proved to be a lagging indicator of the underlying troubles faced by thrifts in the 1980s. An accrual approach, should it prove workable, would offer better information on the financial condition of various federal insurance programs. Mr. Chairman, thanks in large part to the legislative impetus of the CFO and GPRA Acts—efforts led by this Committee—decisionmakers will ultimately have available unprecedented, reliable information on both the financial condition of programs and operations as well as the performance and costs of these activities. While these initiatives carry great potential, they require continued support by the agencies and the Congress. Consequently, this Committee’s continued leadership and oversight will be important to sustain these initiatives and ensure their ultimate success. Generating new kinds of information, however valuable, can be a difficult, intensive process calling for new skills and redeployment of resources. This is a particularly challenging task in our current budgetary environment. 
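The credit-reform treatment described above (recording the present value of a loan guarantee's expected net cost up front, when the commitment is made, rather than scoring it as cost-free until cash later flows) can be restated as a simple discounting exercise. The sketch below is illustrative only, not the official OMB credit subsidy model; the cash-flow figures, discount rate, and function name are all hypothetical.

```python
# Illustrative sketch (not the official OMB credit subsidy model): the
# up-front subsidy cost of a loan guarantee is the present value of the
# expected net cash outflows over the guarantee's life.

def subsidy_cost(expected_default_payments, expected_fee_collections, rate):
    """Present value of (default payments - fees collected), by year.

    Both arguments are lists of expected cash flows, year 1 first;
    rate is the discount rate applied to each year's net flow.
    """
    pv = 0.0
    for year, (defaults, fees) in enumerate(
            zip(expected_default_payments, expected_fee_collections), start=1):
        pv += (defaults - fees) / (1 + rate) ** year
    return pv

# Hypothetical guarantee portfolio: fees come in early, defaults hit later.
cost = subsidy_cost(
    expected_default_payments=[0, 20_000, 40_000, 30_000],
    expected_fee_collections=[10_000, 10_000, 5_000, 0],
    rate=0.05)
print(round(cost))  # a positive number: the commitment has a real cost today
```

Under pre-1992 cash treatment this guarantee would have scored as a gain in year 1 (fees in, no cash out); the present-value view makes the cost visible at the moment of commitment.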
Fiscal constraints may make it difficult for agencies to allocate sufficient resources to information gathering and analysis while facing cuts in basic services. However, such information is vital to the downsizing process itself and can help us sort out the kinds of services and operations that government should be engaged in. Finding the most effective reporting and analytical approaches will require a great deal of collaboration and communication. Appropriations, budget, and authorizing committees need to be full partners in supporting the implementation of these initiatives. This Committee could be instrumental in fostering a constructive dialogue and gaining their support, which is vital to obtaining the resources and investment needed to carry out these efforts. This type of partnership is needed to better link financial and performance data to the budget and program decision-making. The development of new information may for a time outpace the capacity of the process to fully utilize it. Just as federal accounting standards are being tailored to better address the unique needs of federal policymakers, the cost concepts used in budgeting, as well as the budget presentations themselves, may warrant reconsideration. This calls for a concerted congressional effort to rethink how the budget should be structured and presented to best take advantage of this new information. Again, the Committee could be instrumental in bringing together key congressional stakeholders to consider appropriate changes. Finally, the Committee can continue to support evolutionary refinements to reporting approaches. For example, the new financial reports can be even more useful when they are streamlined, rather than the present approach of generating separate reports. I have been stressing an approach in which performance measures and costs are reported together and linked to budget data within a single report. 
This approach is consistent with the CFO Council’s proposal for an Accountability Report, which we support. Mr. Chairman, in addition to strengthening financial management at the federal level this committee is also considering legislation to improve the effectiveness of accountability for federal payments to the state and local levels through the single audit process. Single audits are important accountability tools over the hundreds of billions of dollars that the federal government provides to state and local governments and nonprofit organizations. In June 1994, we reported to the Committee on the Single Audit Act’s important role. It has helped institutionalize fundamental elements of good financial management in state and local governments, such as preparing financial statements in accordance with generally accepted accounting principles, obtaining annual independent comprehensive audits, assessing internal controls and compliance with laws and regulations, monitoring subrecipients, tracking federal funds, and resolving audit findings. In addition, the single audit process is an effective way of promoting accountability over federal assistance because it provides a structured approach to achieve audit coverage over the thousands of state and local governments and nonprofit organizations that receive federal financial assistance. Moreover, particularly in the case of block grants—where the federal financial role diminishes and management and outcomes of federal assistance programs depend heavily on the overall state or local government controls—the single audit process provides accountability by focusing the auditor on the controls affecting the integrated federal and state funding streams. Mr. Chairman, let me emphasize that block grants need not mean the absence of federal accountability provisions. 
Our extensive studies of the block grant experience in the 1980s led us to conclude that reasonable financial and program accountability provisions can help sustain block grants as a stable source of intergovernmental aid. Of course the definition of what is reasonable can be controversial. Overly-intrusive accountability provisions can threaten to overturn the efficiencies gained from flexible funding, while overly-limited provisions can undermine continued congressional support for the programs by depriving the Congress of information on how the funds are used and what results are achieved. Clearly, block grants call for a careful balancing of state and federal concerns. It is in this context that the Single Audit Act can play an especially helpful role in promoting financial accountability for the proper stewardship of federal funds. The act’s focus on overall state controls applied to state entities supported with federal and state funding is very consistent with the block grant approach where states are encouraged to manage federal and state funds on an integrated basis to support state priorities. It also gives state officials an annual report card on the financial management of their own entities. While strongly supporting the single audit concept, we have identified opportunities to strengthen the single audit process while at the same time reducing the burden on state and local governments and nonprofit organizations. The legislation this committee is considering to amend the Single Audit Act would strengthen the single audit process in several key areas. First, the bill would expand the Single Audit Act to include nonprofit organizations. The act currently applies only to state and local governments while nonprofit entities are administratively covered under an OMB Circular. 
Expanding the Single Audit Act to include nonprofit organizations establishes uniform single audit requirements for state and local governments and nonprofit organizations, which would accomplish what this committee contemplated when the act was debated. Second, the dollar threshold that establishes which nonfederal entities must have audits under the act would be raised. Raising the minimum threshold from $25,000 to $300,000 would exempt thousands of entities from federally mandated audits while still covering 95 percent of federal assistance to state and local governments. Third, programs would be selected for testing based on risk. Currently, the act requires auditors to select and test programs based solely on the amount of federal financial assistance the programs receive. Adopting a risk-based approach would increase the effectiveness of the single audit process. Fourth, the single audit reports would be more useful. Program managers we contacted did not find current reporting to be user friendly, principally because of the number of auditor’s reports. Single audit reports often include seven separate reports from the auditor. The proposed legislation would require auditors to include a summary of the results of the work. OMB adopted this approach several years ago at the federal level by including in financial statement audit reports under the CFO Act a new Overview section highlighting key results. We found that it was extremely helpful in providing insights to report users. Fifth, reducing the reporting time frame from the currently allowed 13 months to 9 months would significantly improve the timeliness of the reports. Timeliness alone does not determine the value of a report. But, the lack of timeliness can seriously degrade the value of a report. We understand that some auditors have concerns about meeting a shorter time frame. 
However, we believe that oversight of the hundreds of billions of federal dollars covered by the single audit process is degraded by reports that are issued more than a year after the end of the period audited. Over time, I hope that it will be the rule, rather than the exception, for the audit reports to be submitted in less than 9 months. Sixth, the legislative proposal would provide greater flexibility than the current act allows in carrying out this important oversight activity. The proposed legislation does so by providing the OMB Director authority to adjust some aspects of the single audit process to mesh with changing circumstances. For example, the OMB Director could authorize pilot projects to test alternative ways of achieving the goals of the legislation. The authorities provided the Director should not increase the burden on nonfederal entities. Rather, they are designed to make the Single Audit Act process adaptable to changing circumstances while continuing to promote sound financial management and provide effective oversight over federal resources. The 10 years of experience under the Single Audit Act has shown that the single audit process is a highly effective way to provide accountability for federal awards to state and local governments. The proposed amendments would strengthen this important accountability tool and reduce the burden on thousands of entities. We fully support their enactment. Mr. Chairman, this concludes my statement. I would be happy to now respond to any questions. Financial Management: Momentum Must Be Sustained to Achieve the Reform Goals of the Chief Financial Officers Act (GAO/T-AIMD-95-204, July 25, 1995). Financial Management: CFO Act Is Achieving Meaningful Progress (GAO/T-AIMD-94-149, June 21, 1994). Improving Government: GAO’s Views on H.R. 3400 Management Initiatives (GAO/T-AIMD/GGD-94-97, February 23, 1994).
Improving Government: Actions Needed to Sustain and Enhance Management Reforms (GAO/T-OCG-94-1, January 27, 1994). | GAO discussed the progress being made to implement financial management reforms through the Chief Financial Officers (CFO) Act. GAO noted that improving financial management through implementation of the CFO Act requires: (1) implementing expanded requirements for financial statement audits to improve the reliability of data for decisionmaking; (2) strengthening the efficiency of revenue collection operations and controls; (3) building stronger financial management organizations by upgrading skill levels, enhancing training, and ensuring that CFO possess the necessary authority to achieve change; (4) better solutions to address problems with agencies' underlying financial systems; and (5) designing accountability reports to allow more thorough and objective assessments of agencies' performance and financial conditions and to enhance the budget preparation and deliberation process.
You are an expert at summarizing long articles. Proceed to summarize the following text:
HUD is the principal federal agency responsible for programs dealing with housing and community development and fair housing opportunities. Its missions reflect a broad range of statutory mandates, ranging from making housing affordable by insuring loans for multifamily projects and providing assistance on behalf of about 4.5 million lower-income tenants, to helping revitalize over 4,000 communities through community development programs, to encouraging homeownership by providing mortgage insurance to about 7 million homeowners who might not have been able to qualify for conventional loans. The diversity of HUD’s missions has resulted in a department that is intricately woven into the financial and social framework of the nation and that interacts with a diverse number of constituencies. For example, thousands of public housing authorities (PHA) and many more private housing owners are key players in administering HUD’s public housing and Section 8 rental housing programs and depend on subsidies from the Department to operate. HUD’s programs also operate through other governmental entities, such as state housing finance agencies, nonprofit groups, and state and local governments. In carrying out its missions, HUD is responsible for a significant amount of tax dollars: The discretionary budget outlays for HUD’s programs were estimated to be close to $31.8 billion in fiscal year 1995, over three-quarters of which was for public and assisted housing programs. In addition, HUD is currently one of the nation’s largest financial institutions, with significant commitments, obligations, and exposure: It has management responsibility and potential liability for more than $400 billion of mortgage insurance, an additional $485 billion in outstanding securities, and over $200 billion in prior years’ budget authority for which it has future financial commitments. 
In February 1995, we reported that HUD’s top management had begun to focus attention on overhauling the Department’s operations to correct its long-standing management deficiencies—an ineffective organizational structure, an insufficient mix of staff with the proper skills, weak internal controls, and inadequate information and financial management systems.The agency had formulated a new management approach and philosophy that included balancing risks with results, had begun implementing a substantial reorganization of field offices, and had initiated a number of other actions that would address the four management deficiencies. Over the past year, HUD has continued many of these efforts, but problems remain. For example, in September 1995, HUD completed its field reorganization, which eliminated 10 regional offices and transferred direct authority for staff and resources to the Assistant Secretaries. In January 1996, HUD announced additional efforts to empower the field office personnel and continue the Secretary’s efforts to implement the “community first” philosophy by streamlining headquarters and reducing headquarters’ staffing by 40 percent over 2 years. Many of the headquarters staff will be transferred to the field to enhance the agency’s efforts to be more responsive to local communities. According to the HUD Inspector General’s (IG) most recent semiannual report, while field staff endorsed the elimination of the regional management layer, they reported that communication and cooperation among the program offices had suffered badly and that the promised empowerment of field staff had not materialized. In the area of internal controls, the Department’s new management control program was fully implemented over the past year, according to HUD officials. This program is intended to tie planning with risk-abatement strategies. 
Under the program, managers, as they develop annual management plans, are to identify and prioritize the major risks in each of their programs and then describe how these risks will be abated. According to HUD officials, all of the program offices’ fiscal year 1996 annual management plans contained management control elements, including risk-abatement strategies. Despite improvements, internal controls continue to be a problem. On June 30, 1995, outside auditors issued a disclaimer of opinion on HUD’s fiscal year 1994 consolidated financial statements because weaknesses in internal control and “nonconformances” in systems remained uncorrected. HUD’s most serious internal control weaknesses pertain to its approximately $13 billion grant and subsidy payments to public and Indian housing authorities, including $9.5 billion of its operating subsidies and Section 8 rental assistance. The auditors noted that the existing internal controls and financial systems do not provide adequate assurance that the amounts paid under these programs are valid and correctly calculated, considering tenants’ income and contract rents. As a result, HUD lacks sufficient information to ensure that federally subsidized housing units are occupied by needy lower-income families and that those living in such units are paying the correct rents. In 1995, the Department continued to make progress toward its goals of integrating financial systems, but much remains to be done. During the year, HUD implemented its new administrative accounting system and integrated the system for Public, Indian, and Section 8 housing. In addition, all of the program offices have completed Information Strategy Plans, which identify business and information needs.
Despite these efforts, as of September 1995, HUD had 88 systems in operation or under development, 60 of which are generally not in compliance with the provisions of Office of Management and Budget (OMB) Circular A-127. HUD’s financial systems continue to be identified as high-risk by OMB. The Department deserves credit for its continued emphasis on addressing its long-standing management deficiencies, including a fundamental restructuring of the agency. However, departmental restructuring is still far from being accomplished. HUD’s challenge will be to continue to sustain its focus and commitment to addressing the agency’s long-standing deficiencies while at the same time downsizing the agency, devolving authority to field offices, and providing greater program flexibility to communities. As HUD and the Congress continue to look at ways to reform the Department, they will face the challenge of finding the proper balance between local flexibility and authority and proper accountability for federal funds. Furthermore, until the Department completes its goal of integrating financial management systems, which remains years away, the lack of good information will plague the Department in many areas and limit its capacity to adequately monitor funds. Substantially restructuring programs and providing greater local flexibility to communities will in all likelihood also require new systems. While HUD has formulated approaches and initiated actions to address its department-wide deficiencies, these plans are far from reaching fruition and problems continue. In addition, we believe that until the agency and the Congress are successful in working through the proposals for a major restructuring of the agency, which include consolidating hundreds of program activities, HUD has only a limited capacity to eliminate the Department-wide deficiencies that led us to designate it as high-risk.
Accordingly, we believe that both now and for the foreseeable future, the agency’s programs will be high-risk in terms of their vulnerability to waste, fraud, and abuse. As of September 30, 1995, FHA’s portfolio of insured multifamily loans consisted of 15,785 mortgages with unpaid principal balances of $47.7 billion. About $38.5 billion of the insurance supports more than 14,000 multifamily apartment properties. The remainder of the insurance supports hospitals ($4.9 billion) and nursing homes ($4.3 billion). In addition to mortgage insurance, most of the FHA-insured properties receive some form of direct assistance or subsidy, such as below-market interest rates or Section 8 project-based rental assistance. HUD also provides Section 8 project-based assistance for properties that are not insured by FHA. According to HUD’s data, the Department has 6,391 Section 8 contracts with projects not insured by FHA containing about 375,000 units receiving project-based assistance. The fundamental problems that HUD faces in overseeing the multifamily housing portfolio, which we discussed before this Subcommittee last year, continue. Specifically, for a large proportion of this housing, the government is paying more to house lower-income families than what is needed to provide them decent, affordable housing. The insured multifamily properties also expose the federal government to substantial current and future financial liabilities from default claims. A 1993 study of multifamily rental properties with HUD-insured or HUD-held mortgages found that almost one-fourth of the properties reviewed were “distressed.” Properties were considered distressed if they failed to provide sound housing and lacked the resources to correct deficiencies or if they were likely to fail financially. 
The reasons for these problems are varied, including design flaws in programs; the Department’s dual role as assistance provider and insurer; and long-standing deficiencies in staffing, data systems, and management controls. Program design flaws have resulted in HUD’s subsidizing rents at many properties that are far above market rents. In particular, this problem occurs under HUD’s Section 8 new construction and substantial rehabilitation programs, in which the Department paid for the initial costs of development by establishing rents above the market levels and continued to raise the rents regularly. HUD’s dual role as assistance provider and insurer has contributed to inadequate enforcement of the Department’s standards for the condition of properties and decisions by the agency to increase subsidies in order to avoid claims stemming from loan defaults. In addition, as noted in our June 1995 report on default prevention, inadequate management has resulted in poor living conditions for families with low incomes in a number of insured multifamily properties and contributed to a large number of past and anticipated defaults on FHA-insured loans. During this past year, HUD has attempted to address these problems through a legislative proposal known as “mark to market.” The proposal was applicable to about 8,500 properties that both have FHA insurance and receive Section 8 project-based assistance. According to HUD’s data, project-based assistance is provided for approximately 700,000 of the 855,000 apartment units covered. The proposal was aimed at ending the interdependence of subsidies and insurance claims, eliminating the excess Section 8 subsidy costs, and improving the physical condition of properties in poor condition—generally older properties with low rents. Under the mark-to-market proposal, Section 8 project-based assistance was to be eliminated or phased out for insured properties as the contracts expire. 
The proposal applied whether or not the subsidized rents were above the market levels. Residents living in units that receive project-based assistance were then to receive tenant-based assistance. Owners would set the rents at market levels, which in many cases would reduce the rental income and lead to defaults on the FHA-insured mortgages. To address this, HUD proposed reducing the projects’ mortgages if such action was needed for the properties to be able to compete in the commercial marketplace without project-based assistance. HUD’s goal was to replace the FHA-insured loans with ones not insured by FHA. Hearings were held on HUD’s mark-to-market proposal last year, but neither the House nor the Senate acted on the proposal. In the President’s fiscal year 1997 budget, HUD announced several planned revisions to its mark-to-market proposal. Most notably, the Department has indicated that the proposal will initially focus on a smaller segment of the multifamily housing portfolio—those properties with expiring contracts whose current rents are above the market levels. In addition, HUD states that localities will decide whether the housing subsidies should be tenant-based or project-based. The extent to which this proposal will reduce project-based assistance in favor of tenant-based is not clear. During this past year, HUD has also been undertaking a number of initiatives designed to strengthen its ability to manage its multifamily housing portfolio and address outstanding management deficiencies in its staffing, data systems, and management controls. 
As we reported in June 1995, the initiatives that HUD intended to carry out included (1) using contractors to collect more complete and current information on the physical and financial condition of insured multifamily properties and developing an “early warning system” to more quickly identify troubled properties and (2) deploying Special Workout Assistance Teams (SWAT) to help field offices deal with troubled insured multifamily properties, including the enforcement of HUD’s housing quality standards there. However, progress continues to be slow in implementing these improvements. For example, the early warning system is not yet operational nor is the initiative to contract for periodic physical inspections. The current plans are to contract for these inspections beginning in fiscal year 1997. Also, while the SWAT initiative is regarded by HUD management and HUD’s IG as effectively addressing problems, it is limited in scope and cannot be relied upon to address the Department’s problems across the portfolio. For example, resource limitations preclude expanding this effort as a standard management tool—nor does this effort address the problem of excess subsidy costs. Our recent studies of HUD’s nursing home and hospital programs also identified management deficiencies. We found that HUD does not have data that show how the programs support the Department’s mission. For example, HUD does not collect and analyze information on whom the nursing home program is serving or measure the extent to which the hospital program accomplishes the Department’s goals. In addition, our reports discuss the default risk of these multifamily programs. We found that the accumulation of more than $4 billion of insured hospital projects and the large loan amounts in New York pose risks to the future stability of the program. 
Furthermore, trends in health care and changes in state and federal health care policies that reduce hospitals’ revenues could threaten the solvency of insured hospitals. We also noted that the nursing home program had recently been expanded to include assisted living facilities for the elderly, which may result in the program’s growth and in potentially riskier loans, especially if HUD is unable to effectively underwrite insurance for the loans and monitor their performance. The financial situation for FHA’s single-family mortgage insurance program is very different than that for its multifamily program. The economic net worth of FHA’s single-family Mutual Mortgage Insurance Fund (Fund) continued to improve in fiscal year 1994. We estimate under our conservative baseline scenario that the Fund’s economic net worth was $6.1 billion, as of September 30, 1994. At that time, the Fund had capital resources of about $10.7 billion, which were sufficient to cover the $4.6 billion in expenses that we estimate the Fund will incur in excess of the anticipated revenues over the life of the loans outstanding at that time. The remaining $6.1 billion is the Fund’s economic net worth, or capital—an improvement of about $8.8 billion from the lowest level reached by the Fund at the end of fiscal year 1990. Legislative and other changes to FHA’s single-family mortgage insurance program have helped restore the Fund’s financial health, but favorable prevailing and forecasted economic conditions were primarily responsible for this improvement. Our estimate of the Fund’s economic net worth represents a capital reserve ratio of 2.02 percent of the Fund’s $305 billion in amortized insurance-in-force. Consequently, we estimate that the Fund surpassed the legislative target for reserves (a 2-percent capital ratio by Nov. 2000) during fiscal year 1994. 
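The Fund arithmetic above is simple enough to restate directly. The sketch below uses the rounded figures from the text (in billions of dollars), so the computed ratio comes out at roughly 2.0 percent rather than the more precise 2.02 percent reported from the unrounded data.

```python
# Rounded FY 1994 figures from the text, in billions of dollars.
capital_resources = 10.7      # resources on hand at September 30, 1994
future_net_expenses = 4.6     # estimated expenses in excess of revenues
insurance_in_force = 305.0    # amortized insurance-in-force

# Economic net worth (capital) = resources minus estimated future net expenses.
economic_net_worth = capital_resources - future_net_expenses

# Capital reserve ratio = economic net worth / insurance-in-force.
capital_ratio = economic_net_worth / insurance_in_force

print(f"economic net worth: ${economic_net_worth:.1f} billion")
print(f"capital ratio: {capital_ratio:.1%}")  # near the 2-percent statutory target
```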
One area in which the Congress could make changes that would have a positive effect on the Fund’s financial health is in HUD’s mortgage assignment program. The assignment program, created in 1959, is intended to help mortgagors who have defaulted on HUD-insured loans to avoid foreclosure and retain their homes by providing these mortgagors with financial relief by reducing or suspending their mortgage payments for up to 36 months until they can resume making payments. Our recent review of FHA’s assignment program revealed that the program operates at a high cost to the Fund and has not been very successful helping borrowers avoid foreclosure in the long run. We estimated that about 52 percent of the borrowers who entered the program since fiscal year 1989 will eventually lose their homes through foreclosure. We forecast that the remaining borrowers (48 percent) will pay off their loans following the sale or refinancing of their homes, often after remaining in the program for long periods of time. The costs incurred by HUD to achieve this result exceed the costs that would have been incurred if all assigned loans had gone immediately to foreclosure without assignment. We estimated that, for borrowers accepted into the assignment program since fiscal year 1989, FHA will incur losses of about $1.5 billion more than would be incurred in the absence of the program. While FHA borrowers’ premiums pay for these costs, not the U.S. Treasury, the program’s costs make it more difficult for the Fund to maintain financial self-sufficiency. We reported that the Congress may wish to consider alternatives to reduce the additional losses incurred by the program. The alternatives we suggested focused on making changes to the program. Legislation is now pending that would eliminate the current program and replace it with an alternative that will, according to the Congressional Budget Office (CBO), result in an estimated savings of $2.8 billion over 7 years. 
The nation’s 3,300 PHAs do not all have severe management problems nor do they share the same problems. Much of the public housing stock is in good condition and provides adequate housing for most of the over 3 million low-income residents. However, some PHAs we have visited are deeply troubled in many dimensions. These housing authorities’ problems include an unmet need for capital improvements, physical deterioration of the housing stock, high vacancy rates, and high concentrations of poor and unemployed people. Moreover, before 1995, HUD’s limited oversight of the most troubled housing authorities had allowed some authorities to provide substandard services to their residents for years. Some of our ongoing work deals directly with several of these interrelated problems that can lead to serious management and budget considerations for HUD. Housing authorities are caught in a very difficult position. At a time when they need larger operating subsidies to replace declining rent revenues, they also face appropriation realities brought on by the need to balance the federal budget and meet the needs of other low-income housing programs. Declining rent revenue is a direct result of targeting housing assistance to those with very low incomes. For instance, incomes of residents in public housing have dropped nearly half—from 33 percent of the area median in 1981 to about 17 percent today—thereby decreasing the availability of rental income to offset operating costs. In addition, the average vacancy rate increased from 5.8 percent in 1984 to 8 percent in 1995, further reducing the rental income available to PHAs. Making it more difficult to make ends meet, annual appropriations have not covered PHAs’ operating subsidy needs for several years. The pending fiscal year 1996 appropriations bill that was vetoed by the President would have provided only 89.7 percent of their operating needs. 
In a survey of 21 judgmentally selected housing authorities, we found that one of the first responses to insufficient operating funds is to reduce spending on maintenance. This compounds PHAs’ problems by perpetuating the cycle of decreased maintenance, increased vacancies, and decreased rental income. Can this cycle be broken? We believe that provisions in pending legislation, various proposals from HUD, and other programs could act together to alleviate some of the pressures on housing authorities. Both the proposed legislation and HUD’s latest transformation plan, known as “Blueprint II,” would foster admitting and retaining a higher proportion of working families and thus raising the total rental income. However, policymakers need to recognize that in some cities, this policy change could cause some people with very low incomes to wait longer to receive housing assistance. We believe that these legislative and regulatory changes will help maintain PHAs’ financial health. However, HUD and the Congress need the cooperation of the public housing authority industry. Many housing authorities have told us that the current system is too cumbersome and is detrimental to promoting their fiscal health. Like organizations in the private sector, we believe PHAs are realizing that they must take the initiative and seek out management practices that can improve performance and efficiency. We are currently finding that many PHAs are initiating innovative practices to cut costs and increase revenues. These practices include privatization, consolidation, and partnerships. We will report later in the year on the use and applicability of these practices for all PHAs. We have concluded in the past that HUD’s program for assisting troubled housing authorities should take a more active role in addressing their performance. 
We also reported last year that HUD had made limited use of its legal authority to declare troubled housing authorities in breach of their contracts with the Department. Moreover, the overall results of HUD’s focused technical assistance program that targeted the large, troubled authorities have been inconsistent. During the past year, 4 troubled authorities have come off the original list of 17, and 4 others have made substantial improvements in their performance scores. However, the other nine authorities—accounting for over 70 percent of all housing units managed by troubled authorities—have not shown appreciable improvement. Furthermore, the performance of four of the nine declined this past year, despite HUD’s intervention and technical assistance. HUD appears to be taking a more active role in this area. In addition to having some success with several large housing authorities, three times in the last 10 months—in Chicago, New Orleans, and San Francisco—HUD has made use of its authority to either declare an authority in breach of its contract or to take control upon the resignation of the authority’s board of commissioners. However, taking over troubled housing authorities has not come without a price. HUD’s top policymakers in public housing are simultaneously engaged in the everyday problems of managing HUD and overseeing several problem housing authorities. For example, HUD’s Acting Assistant Secretary for Public and Indian Housing functions as the New Orleans Housing Authority’s Board of Commissioners and leads HUD’s takeover team in San Francisco. Approximately 11 local and headquarters HUD staff are at the New Orleans Housing Authority, and a similar staff will be placed at the San Francisco Housing Authority. In addition, the potential for other emergency takeovers looms in the future as reduced funding puts pressure on public housing managers to do more with less. 
Additional takeovers will considerably strain HUD’s already-stretched management team at a time when a major reform of low-income housing may also require its attention. Last year, when we appeared before this Subcommittee, we discussed a CBO report that detailed how the number of assisted families almost doubled from 1977 through 1994, rising from about 2.4 million to about 4.7 million. According to CBO, the annual real outlays (in 1994 dollars) more than tripled during this period, rising from about $6.6 billion to about $22 billion. Difficult budget choices persist, most notably for renewing assistance under HUD’s Section 8 programs. According to HUD’s recently released plan to continue its reinvention, over the next 7 years the Department will face a significant challenge to its budget as Section 8 contracts providing affordable housing to hundreds of thousands of families expire and require renewal. HUD estimates that while outlays will remain relatively flat, the needed budget authority will balloon from $2 billion in fiscal year 1995 to $20 billion in fiscal year 2002 (assuming 1-year renewals). HUD notes that while contract renewals do not contribute significantly to the budget deficit, the demand for ever-increasing levels of budget authority cannot be met at a time of extremely tight fiscal constraints unless fundamental policy and procedural changes are made. HUD’s plan states that, to date, decisionmakers have met this challenge, in part by shortening the terms of contract renewals from 5 years in the early 1990s to 4 years in fiscal year 1994, 3 years in 1995, and now 2 years in 1996. Shorter terms substantially reduce the amount of budget authority needed to renew a Section 8 contract. However, HUD concluded that even shortening contract renewal terms to 1 year may not be sufficient to cover the budget authority needs resulting from the cascade of expiring contracts in the next half decade. 
HUD noted that a very real danger exists for its budget allocation to be sharply reduced because of the deep reductions in the discretionary budget caps that are now under consideration. If these reductions occur, according to HUD, the budget authority available for the Department’s other discretionary programs, such as community development block grants, programs for the homeless, and public housing, could be drastically reduced or even eliminated. We agree that these large figures present difficult choices for policymakers who must consider competing needs. These choices become even more difficult because they come at a time when, according to HUD, the “worst case” needs for housing have not been met for a record 5.3 million households. HUD’s serious management and budget problems have greatly hampered its ability to carry out its wide-ranging responsibilities. Both houses of Congress and HUD have proposed major but different reforms, including the ultimate reform—the complete dismantling of HUD. With the high stakes involved—the tens of billions of dollars that HUD spends each year, the millions of vulnerable families (including millions of households that do not receive assistance from HUD because of budget constraints), and the overwhelming needs of distressed communities—it is not unexpected that deciding the future direction of housing and community development policy and of HUD will take some time. Balancing business, budget, and social goals is a Herculean task. Legislation to reform HUD has been introduced in both houses of Congress. HUD has continued to refine its vision for a reformed agency through successive versions of its “blueprint.” What is needed now is for the Congress and the administration to agree on the future direction of housing and community development policy. 
This agreement should weigh the inherent trade-offs involved in understanding the magnitude of the needs of poor families and individuals, communities, first-time home buyers, and others that HUD currently serves; deciding who it is that federal housing and community development policy will serve and the priority of competing needs; deciding the mechanisms for delivering services (e.g., block grants) to meet those needs, and the federal, state, and local roles in service delivery; and determining how much to spend. Mr. Chairman, this concludes our prepared remarks. We will be pleased to respond to any questions that you and other Members of the Subcommittee may have. We in GAO look forward to working with the Congress to help address the issues before it. FHA Hospital Mortgage Insurance Program: Health Care Trends and Portfolio Concentration Could Affect Program Stability (GAO/HEHS-96-29, Feb. 27, 1996). Homeownership: Mixed Results and High Costs Raise Concerns About HUD’s Mortgage Assignment Program (GAO/RCED-96-2, Oct. 18, 1995). Multifamily Housing: Issues and Options to Consider in Revising HUD’s Low-Income Housing Preservation Program (GAO/T-RCED-96-29, Oct. 17, 1995). Housing and Urban Development: Public and Assisted Housing Reform (GAO/T-RCED-96-25, Oct. 13, 1995). Housing and Urban Development: Public and Assisted Housing Reform (GAO/T-RCED-96-22, Oct. 13, 1995). Block Grants: Issues in Designing Accountability Provisions (GAO/AIMD-95-226, Sept. 1, 1995). HUD Management: Greater Oversight Needed of FHA’s Nursing Home Insurance Program (GAO/RCED-95-214, Aug. 25, 1995). Property Disposition: Information on HUD’s Acquisition and Disposition of Single-Family Properties (GAO/RCED-95-144FS, July 24, 1995). Housing and Urban Development: HUD’s Reinvention Blueprint Raises Budget Issues and Opportunities (GAO/T-RCED-95-196, July 13, 1995). Public Housing: Converting to Housing Certificates Raises Major Questions About Cost (GAO/RCED-95-195, June 20, 1995). 
Purpose of, Funding for, and Views on Certain HUD Programs (GAO/RCED-95-189R, June 20, 1995). Multifamily Housing: HUD’s Mark-to-Market Proposal (GAO/T-RCED-95-230, June 15, 1995). Multifamily Housing: HUD’s Proposal to Restructure Its Portfolio (GAO/T-RCED-95-226, June 13, 1995). Government Restructuring: Identifying Potential Duplication in Federal Missions and Approaches (GAO/T-AIMD-95-161, June 7, 1995). HUD Management: FHA’s Multifamily Loan Loss Reserves and Default Prevention Efforts (GAO/RCED/AIMD-95-100, June 5, 1995). Program Consolidation: Budgetary Implications and Other Issues (GAO/T-AIMD-95-145, May 23, 1995). Government Reorganization: Issues and Principles (GAO/T-GGD/AIMD-95-166, May 17, 1995). Multifamily Housing: Better Direction and Oversight by HUD Needed for Properties Sold With Rent Restrictions (GAO/RCED-95-72, Mar. 22, 1995). Housing and Urban Development: Reform and Reinvention Issues (GAO/T-RCED-95-129, Mar. 14, 1995). Housing and Urban Development: Reforms at HUD and Issues for Its Future (GAO/T-RCED-95-108, Feb. 22, 1995). Housing and Urban Development: Reinvention and Budget Issues (GAO/T-RCED-95-112, Feb. 22, 1995). High-Risk Series: Department of Housing and Urban Development (GAO/HR-95-11, Feb. 1995). Housing and Urban Development: Major Management and Budget Issues (GAO/T-RCED-95-86, Jan. 19, 1995, and GAO/T-RCED-95-89, Jan. 24, 1995). Reengineering Organizations: Results of a GAO Symposium (GAO/NSIAD-95-34, Dec. 13, 1994). Federally Assisted Housing: Expanding HUD’s Options for Dealing With Physically Distressed Properties (GAO/T-RCED-95-38, Oct. 6, 1994). Rural Development: Patchwork of Federal Programs Needs to Be Reappraised (GAO/RCED-94-165, July 28, 1994). Federally Assisted Housing: Condition of Some Properties Receiving Section 8 Project-Based Assistance Is Below Housing Quality Standards (GAO/T-RCED-94-273, July 26, 1994, and Video, GAO/RCED-94-01VR). 
Public Housing: Information on Backlogged Modernization Funds (GAO/RCED-94-217FS, July 15, 1994). Homelessness: McKinney Act Programs Provide Assistance but Are Not Designed to Be the Solution (GAO/RCED-94-37, May 31, 1994). Section 8 Rental Housing: Merging Assistance Programs Has Benefits but Raises Implementation Issues (GAO/RCED-94-85, May 27, 1994). Lead-Based Paint Poisoning: Children in Section 8 Tenant-Based Housing Are Not Adequately Protected (GAO/RCED-94-137, May 13, 1994). HUD Information Resources: Strategic Focus and Improved Management Controls Needed (GAO/AIMD-94-34, Apr. 14, 1994). Multifamily Housing: Status of HUD’s Multifamily Loan Portfolios (GAO/RCED-94-173FS, Apr. 12, 1994). Housing Finance: Expanding Capital for Affordable Multifamily Housing (GAO/RCED-94-3, Oct. 27, 1993). Government National Mortgage Association: Greater Staffing Flexibility Needed to Improve Management (GAO/RCED-93-100, June 30, 1993). Multifamily Housing: Impediments to Disposition of Properties Owned by the Department of Housing and Urban Development (GAO/T-RCED-93-37, May 12, 1993). HUD Reforms: Progress Made Since the HUD Scandals but Much Work Remains (GAO/RCED-92-46, Jan. 31, 1992). | GAO discussed management and budget problems facing the Department of Housing and Urban Development (HUD). GAO noted that: (1) weak internal controls, an ineffective organizational structure, an insufficient staff skill mix, and inadequate information and financial management systems have hampered HUD ability to carry out its mission and led to GAO designating HUD as a high-risk area in January 1994; (2) despite some progress in curing management deficiencies, problems persist and, as a result, will likely continue to make HUD vulnerable to waste, fraud, and abuse; (3) HUD and Congress must try to reduce excessive housing subsidies, address the physical inadequacies of insured multifamily properties, maintain the single-family insurance fund's financial health, address public housing's social, management, and budget problems, and contain the costs of renewing housing subsidy contracts for lower-income families; (4) Congress and HUD also need to reexamine and reach consensus on housing and community development policy; and (5) HUD downsizing will likely affect its ability to limit financial exposure, carry out its mission, and correct Department-wide management and information system problems. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In January 2004, the President announced a new “Vision for Space Exploration” calling for human and robotic missions to the Moon, Mars, and beyond. Over the next two decades, NASA plans to spend over $100 billion to develop a number of new capabilities, supporting technologies, and facilities that are critical to enabling space exploration missions. Development of the critical capabilities and technologies will be largely dependent on NASA contractors, who constitute more than two-thirds of NASA’s workforce. According to NASA officials, 87 percent of NASA’s $16.6 billion budget for fiscal year 2006 was spent on work performed by its contractors. Since 1990, we have designated NASA’s contract management as a high-risk area. This is based primarily on NASA’s lack of a modern integrated financial management system that can provide reliable information on contract spending and performance as well as NASA’s lack of emphasis on end results, product performance, and cost control. For example, our most recent high-risk report stated that while NASA has taken actions to improve its contract management function, it continues to face considerable challenges in implementing its contracts effectively. NASA is organized under four mission directorates—Aeronautics Research, Exploration, Science, and Space Operations—each of which covers a major area of the agency’s research and development efforts. The agency is composed of NASA headquarters, 10 field centers, and the contractor-operated Jet Propulsion Laboratory. NASA and other federal agencies can choose among numerous contract types for acquiring goods and services that can differ in part according to the nature of the fee that agencies offer to the contractor for achieving or exceeding specified objectives or goals. According to the FAR, a CPAF contract is appropriate to use when it is difficult to measure key elements of performance. It is widely used to procure nonroutine services such as the development of new systems.
Typically, award-fee contracts emphasize several aspects of contractor performance, such as schedule performance, technical performance, and cost control. Because development and administration of award-fee contracts involve substantially more effort over the life of a contract than other types of contracts, the FAR and NASA’s Award Fee Contracting Guide specify that the expected benefits of using an award-fee contract must exceed the additional administrative effort and cost involved. The theory behind CPAF contracts is that although the government assumes most of the cost risk, it retains control over most or all of the contractor’s potential profit as leverage. On CPAF contracts, the award fee is often the only source of potential fee for the contractor. According to the NASA FAR Supplement and NASA’s Award Fee Contracting Guide, these contracts can include a base fee of anywhere from 0 to 3 percent of the estimated value of a nonservice contract. However, NASA’s regulations and guide do not allow the use of a base fee on service contracts. Table 1 shows the percentage of award fee available on the contracts we examined. (See app. II for a description of these contracts.) NASA relies heavily on CPAF contracts. This contract type accounted for 48 percent of obligated contract dollars and 7.7 percent of contract actions from fiscal years 2002 through 2004. By comparison, between fiscal years 1999 and 2003, award-fee contracts accounted for 13 percent of the contract dollars and 3.4 percent of contract actions at the Department of Defense (DOD). A CPAF contract includes an estimate of the total cost of what is being contracted for, may include a fee with a possible base amount fixed at the inception of the contract, and includes an award amount that is intended to motivate excellence in contract performance. The award fee is paid based upon the government’s periodic judgmental evaluations of contractor performance. 
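A minimal sketch of the CPAF fee structure described above appears below. The 3-percent base-fee cap on nonservice contracts, the prohibition of base fee on service contracts, and the no-fee floor for less-than-satisfactory scores come from NASA's rules as described in this testimony; the assumption that the earned fee scales proportionally with the 0-100 evaluation score is an illustrative simplification, and all dollar amounts and names are hypothetical, not drawn from any actual NASA contract.

```python
# Hypothetical sketch of the CPAF fee mechanics described in the text.
# The proportional score-to-fee mapping is an assumed simplification.

def max_base_fee(estimated_cost, is_service_contract):
    """Base fee may be 0-3 percent of estimated cost on nonservice
    (end item) contracts; NASA's rules allow no base fee on service
    contracts."""
    if is_service_contract:
        return 0.0
    return 0.03 * estimated_cost

def interim_award_fee(available_fee_pool, score):
    """Award fee for one evaluation period. The fee determination
    official's score (0-100) sets the share of the available pool paid
    out; below 61 (less than satisfactory) no fee is paid."""
    if score < 61:
        return 0.0
    return available_fee_pool * (score / 100.0)

# A hypothetical $100 million end item contract with a $7 million fee pool:
print(max_base_fee(100e6, is_service_contract=False))  # up to $3 million base fee
print(interim_award_fee(7e6, score=85))                # very good period: most of the pool
print(interim_award_fee(7e6, score=55))                # below satisfactory: no fee
```

The point of the structure is leverage: because the award fee is often the contractor's only source of profit on a CPAF contract, the government's periodic judgmental score directly controls the contractor's return.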
When developing evaluation plans, NASA’s award-fee guide indicates that evaluation plans may include outcomes, outputs, inputs, or a combination of these elements. NASA’s guide expresses a preference for outcome factors. It notes that while it is sometimes valuable to consider input and output factors when evaluating contractor performance, outcome factors are better indicators of success relative to the desired result. An outcome factor is an assessment of the results of an activity compared to its intended purpose. Outcome-based factors are the least administratively burdensome type of performance evaluation factor, and should provide the best indicator of overall success. Outcome-based factors should therefore be the first type of evaluation factor considered for use, and are often ideal for nonroutine efforts. An output factor is the tabulation, calculation, or recording of activity or effort and can be expressed in a quantitative or qualitative manner. Output factors may be more desirable for routine efforts, but are administratively more burdensome than outcome factors due to the tabulation, calculation, or recording requirements. When output factors are used, care should be taken to ensure that there is a logical connection between the reported measures and the program’s mission, goals, and objectives. Input factors refer to intermediate processes, procedures, actions, or techniques that are key elements influencing successful contract performance. These may include testing and other engineering processes and techniques; quality assurance and maintenance procedures; subcontracting plans; purchasing department management; and inventory, work assignment, and budgetary controls. For CPAF contracts, NASA personnel conduct periodic, typically semiannual evaluations of a contractor’s performance against the criteria specified in a performance evaluation plan.
During the course of the evaluation period, performance monitors track contractor performance, and once the period is over they assess the performance and report to the performance evaluation board (PEB). The PEB considers the reports as well as any other pertinent information and prepares a report for the fee determination official (FDO) with findings and recommendations. The contractor is given an opportunity to provide a self-assessment of its performance during the evaluation period, which is often a written report. The FDO may meet with the PEB to discuss the report, after which a final determination is made in writing as to the amount of fee to be paid. The FDO provides the determination to the contracting officer and a copy of the related document to the contractor. When discussing award-fee contracts, it is important to acknowledge the acquisition environment in which they are used. Award fees are intended to motivate excellent contractor performance, which should result in excellent program outcomes. However, award fees should not be used to make up for factors internal or external to the acquisition environment that hinder the success of acquisition outcomes. These factors may include inadequate resources and financial management systems, lack of knowledge prior to starting the acquisition, or unsound acquisition practices. We have reported that in some cases, NASA’s failure to define requirements adequately and develop realistic cost estimates resulted in projects costing more, taking longer, and achieving less than originally planned. The persistence of these problems in NASA contract management is not only indicative of undisciplined processes or practices such as these, but may also reflect the fact that the design, development, and production of major space systems are extremely complex technical processes that must operate within complex budget and political processes. 
Even properly run programs can experience problems that may arise from unknowns, such as technical obstacles and changes in circumstances. Only a few things need to go wrong to cause major problems, and many things must go right for a program to be successful. The NASA FAR Supplement and NASA’s Award Fee Contracting Guide address many of the issues and problems identified by NASA on the use of award-fee contracts and provide criteria for appropriately using such contracts. Much of the guidance on award-fee contracting was issued in response to weaknesses in CPAF contracting practices identified by NASA internal reviews and NASA’s Office of Inspector General in the early 1990s. Those weaknesses included the awarding of excessive fees with limited emphasis on acquisition outcomes (end results, product performance, and cost control); rollover of unearned fee; use of base fee; and the failure to use both positive and negative incentives. NASA updated its award-fee guide in 1994, 1997, and 2001 to explain and elaborate on its award-fee policy. The 2001 revision also reflects the FAR’s additional emphasis on using performance-based contracts. NASA’s award-fee guide emphasizes tying fees to outcome factors. The guide states that outcome-based factors are the least administratively burdensome type of evaluation factor and should provide the best indicator of overall success. The award-fee guide warns against micromanaging performance and diluting the emphasis of criteria by spreading the potential award fee over a large number of performance evaluation factors. Instead, the guide recommends selecting broad performance evaluation factors, such as technical factors, project management, and cost control supplemented by a limited number of subfactors under these factors. 
Cost control is required to be a key performance evaluation factor in award-fee performance evaluation plans, largely because of past performance issues in which contractors were paid millions of dollars in fees on contracts that were experiencing hundreds of millions of dollars in cost overruns. The NASA FAR Supplement states that cost control shall be no less than 25 percent of the total weighted evaluation factors when explicit evaluation factor weightings are used. The NASA FAR Supplement states that emphasis on cost control should be balanced against other performance requirement objectives, and the contractor should not be incentivized to pursue cost control to the point that overall performance is significantly degraded. NASA’s regulations prohibit rolling over unearned fee to subsequent evaluation periods for service contracts. For such contracts, each interim evaluation and the last evaluation are final. Another key element of the current award-fee regulations is an increased emphasis on overall contractor performance and the end product, rather than on incremental progress. NASA requires conducting interim evaluations on end item contracts until final product delivery to monitor performance prior to contract completion and establish the basis for making interim payments. At the end of the contract, a final evaluation is conducted and the contractor’s total performance is evaluated against the award-fee plan to determine total earned award fee. For example, the contractor may be evaluated and paid an interim fee once every 6 months until the product is delivered. During the final evaluation, the contractor’s performance is evaluated to determine total earned award fee. The final evaluation may result in the contractor retaining the fee previously awarded or receiving additional or less fee than previously awarded and thus refunding a portion of the fee to the government. 
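The interim-payment and final-evaluation mechanics described above amount to a simple true-up at product delivery. The sketch below is a hypothetical illustration; the function name, variable names, and dollar amounts are mine, not from NASA's guidance.

```python
# Hypothetical sketch of the end item contract true-up described in the
# text: interim fees are paid during performance, and the final
# evaluation sets the total earned award fee for the whole contract.

def final_fee_adjustment(interim_fees_paid, total_earned_fee):
    """At the final evaluation, the contractor's total performance
    determines the total earned award fee. The adjustment is the
    difference between that total and the sum of interim payments:
    positive means an additional payment to the contractor; negative
    means the contractor refunds fee to the government."""
    return total_earned_fee - sum(interim_fees_paid)

# Interim payments of $2M and $3M, but the final evaluation of total
# performance supports only $4M of earned fee:
adjustment = final_fee_adjustment([2e6, 3e6], 4e6)
print(adjustment)  # negative: the contractor refunds $1 million
```

This structure is what lets NASA base the ultimate fee decision on actual quality, total cost, and schedule at delivery rather than on incremental progress alone.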
The final evaluation provides NASA the opportunity to make an award-fee decision based on actual quality, total cost, and ability to meet the contract schedule at the point the final product is delivered. Further, under the award-fee policy in effect prior to the 1994 and subsequent revisions to the guidance, base fee was allowed on all CPAF contracts. NASA’s current regulations prohibit the use of base fee on service contracts and restrict the use of base fee on end item contracts, such as for hardware. When base fee is used, it is not to exceed 3 percent of estimated contract costs and it should only be paid if the final award-fee evaluation is satisfactory or better. We note that base fee, which was paid on two of the three end item contracts we reviewed, did not exceed 3 percent, and none of the seven service contracts included base fee. Another issue addressed by NASA’s regulations is the use of both positive and negative performance incentives in its CPAF contracts. The NASA FAR Supplement provides that award-fee contracts with primary deliverables of hardware and with a total estimated cost and fee of greater than $25 million require both kinds of incentives based on measurements of hardware performance against objective criteria. Performance incentives are separate and distinct from award fee and measure contractor performance up to delivery and acceptance. Performance incentives are designed to reward contractors when performance of delivered hardware is above minimum contract requirements. For example, if the government establishes a specified level of objective performance for a product that the contractor exceeds, the contractor can be paid a performance incentive in addition to the award fee already paid. If the contractor just meets this measure, it cannot receive an additional performance incentive and keeps the award fee already paid. 
If the contractor fails to meet the measure, however, it must pay a negative performance incentive fee that reduces or eliminates the entire award fee. To address inconsistencies among NASA centers in how they evaluate contractor performance, the current award-fee regulations also provide a uniform rating system to be used for all NASA award-fee contracts. It includes adjectival ratings as well as a numerical scoring system of 0-100. Scores of 61-70 percent are considered satisfactory, and the regulations specify that contractors receiving a rating of less than 61 percent will not receive any fee. A contractor is not to be paid any base fee or award fee for less than satisfactory overall performance. NASA’s award-fee guide encourages the use of performance-based contracts for the procurement of services and supplies. The guide states that constructing performance-based contracts that clearly define performance requirements, include easily understood performance standards, and have an objective incentive mechanism will result in contractors having a clearer understanding of the government’s expectations and will ultimately facilitate enhanced contractor performance. Finally, because of the cost and administrative burden associated with administering award-fee contracts, the FAR and NASA’s award-fee guide specify consideration of the costs and benefits of using a CPAF contract before committing to this contract type. Through an evaluation of the administrative costs versus the expected benefits, the contracting officer should be able to assess whether the benefits the government gains through a CPAF contract will outweigh the additional costs of overseeing and administering the contract. The award-fee guide provides an example of how to calculate the administrative cost and states that benefits could be measured in dollars saved through cost control or enhanced technical capability. 
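Two of the fee rules described above can be illustrated with a short sketch: the uniform 0-100 rating scale, under which a score below 61 earns neither base fee nor award fee, and the settlement of a hardware performance incentive at delivery. This is an illustration only, not regulation text: the conversion of a satisfactory-or-better score into earned fee as a straight percentage of the available fee is an assumed convention, and all dollar amounts and thresholds in the examples are hypothetical.

```python
def earned_award_fee(score: int, available_fee: float,
                     base_fee: float = 0.0) -> float:
    """Fee earned for an evaluation period under the 0-100 rating scale.

    A score below 61 is less than satisfactory and, per the regulations
    described above, earns neither base fee nor award fee. For higher
    scores, this sketch assumes (illustratively) that earned award fee
    is the score applied as a percentage of the available fee.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be on the 0-100 scale")
    if score < 61:
        return 0.0
    return base_fee + available_fee * score / 100.0


def settle_hardware_incentive(delivered: float, required: float,
                              award_fee_paid: float,
                              positive_incentive: float,
                              negative_incentive: float) -> float:
    """Net fee adjustment when delivered hardware is measured against an
    objective performance requirement.

    Exceeding the measure earns the positive incentive on top of the
    award fee already paid; just meeting it leaves that fee unchanged;
    failing it triggers a negative incentive that can reduce or wipe out
    the entire award fee, but no more than that.
    """
    if delivered > required:
        return positive_incentive
    if delivered == required:
        return 0.0
    # The clawback is capped at the award fee already paid.
    return -min(negative_incentive, award_fee_paid)
```

Under this illustrative convention, a contractor scoring 70 on a period with $1 million of available fee would earn $700,000, while a score of 60 would earn nothing.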
Although the revisions in NASA’s regulations and guidance on award-fee contracts address many weaknesses previously identified, the contracts that we reviewed did not always demonstrate use of award fees by the centers in the way that NASA prefers as outlined in its guidance. Some performance evaluation plans or reports included input evaluation factors, which are not the best indicators of success relative to the desired result, although they are allowed by the guidance. Other contracts included numerous subcategories for evaluating the contractor that can lessen the importance of any particular subcategory and reduce the leverage of the award fee on any particular criterion. Also, although the FAR and NASA’s award-fee guide call for a consideration of the costs and benefits of using cost-plus-award-fee contracts because of the cost and administrative burden involved, we found no examples of a documented analysis of costs and benefits. Finally, NASA officials expressed satisfaction with the results of the contracts based on their evaluations of contractor performance against criteria established in the award-fee plan. Those evaluations would indicate generally good performance. However, that performance did not always translate into desired program outcomes. NASA paid a majority of the available award fee on all of the contracts we reviewed, including those end item contracts that did not deliver a capability within initial cost, schedule, and performance parameters. That disconnect raises questions as to the extent NASA is achieving the effectiveness it sought through the establishment of guidance on the use of award fees. Further, NASA has not evaluated the overall effectiveness of award fees in promoting program outcomes and does not have metrics in place for measuring their effectiveness in achieving program outcomes. Some performance evaluation subfactors included in performance evaluation plans or reports were not outcome-oriented.
NASA’s award-fee guide states that while it is sometimes valuable to consider input and output factors when evaluating contractor performance, it is NASA’s preference when feasible to tie fees to evaluation factors that are based on outcomes because outcome-based factors provide the best indicator of overall success. The award-fee guide recommends selecting broad performance evaluation factors, such as technical factors, project management, and cost control, and cautions that factors related to intermediate processes, procedures, and actions may cause the contractor to divert its attention from the overall desired outcome. The guide states that those types of factors, while allowed, are not always true indicators of the contractor’s performance and should be relied on with caution. Further, with service contracts, input factors may be of little or no value as a basis for evaluation. While the contracts we reviewed generally used outcome factors as part of the evaluation of performance, some supporting subfactors that formed the basis of the ratings for performance measured compliance with process or input factors that may not provide the best indicators of success relative to the desired results. For example, a part of the award fee on the Mechanical System Engineering Services (MSES) contract was to be awarded for program and business management performance. There were five subfactors under this primary performance factor. Two of these subfactors (program planning and organizational management, and business management) were input subfactors. These two input subfactors measure contractor processes or inputs, but do not focus on final results. Subfactors in the Landsat-7 contract included input subfactors such as responsiveness of the contractor’s corporate management, quality and effectiveness of the contractor’s scheduling system, and prudent utilization of manpower and timely removal of manpower upon completion of tasks.
The NASA award-fee guide cautions that spreading the potential award fee over a large number of performance evaluation factors dilutes emphasis on any particular performance evaluation criterion, increases the prospect of any one item being too small and thus overlooked, and increases the administrative burden. It encourages broad performance evaluation factors such as technical factors, project management, and cost control, which should be supplemented by only a limited number of subfactors describing significant evaluation elements over which the contractor has effective management control. Our analysis showed that a large number of subfactors were used to evaluate contractor performance for some contracts. For example, the Jet Propulsion Laboratory (JPL) contract, which includes both service and product deliverables defined in task orders under the contract, uses three primary performance evaluation factors for measuring contractor performance—programmatic, scientific, and engineering; institutional management; and support to outreach initiative programs. Although the JPL performance evaluation plan characterizes award-fee subfactors as representing major areas of emphasis during the performance period, the award-fee subfactors used to support the broad performance evaluation factors were numerous—96 subfactors were used to evaluate the contractor’s performance in fiscal year 2004, and 108 subfactors were used in fiscal year 2005. The Engineering and Technical Support for Life Sciences contract used three broad performance evaluation factors also—technical performance, schedule performance and contract management, and cost control—but evaluated the contractor on numerous supporting subfactors identified as tasks or subtasks in the contractor performance evaluation reports. 
For example, on one task order under this contract, performance evaluation reports for various evaluation periods showed as many as 50 different subtasks used to evaluate the contractor’s performance for the primary evaluation criteria: (1) technical performance and (2) schedule performance and contract management. The Landsat-7 contract also included a number of subfactors. Contractor performance under this contract was evaluated in several different areas each time the performance evaluation board met. Technical performance and program management were grouped together in one primary performance evaluation factor, and business management and cost performance were grouped together in the other primary performance evaluation factor. There were 9 subfactors under technical performance and 12 subfactors under program management, including quality and effectiveness of the contractor’s scheduling system. Under business management and cost performance, 17 evaluation subfactors and elements were to be considered, including compliance with general contract provisions and clauses and weekly scheduling of teleconferences to determine schedule status. In addition to the number of subfactors that fell under the two primary performance evaluation factors, there were nine additional evaluation criteria, including resourcefulness, communication, and responsiveness. Although the FAR and NASA’s award-fee guide require consideration of the costs and benefits of using a CPAF contract before committing to this contract type to determine whether the benefits outweigh the additional cost and administrative burden of managing the contract, we found no instances where a documented cost-benefit analysis had been done for any of the contracts under our review. 
According to the guidance, since award-fee contracts require additional administrative effort, they should be used only when the contract values, performance period, and expected benefits are sufficient to warrant that additional management effort. Careful selection of the most appropriate contract type and careful tailoring should prevent a situation in which the burden of administering the award fee is out of proportion to the improvements expected in the quality of the contractor’s performance and in overall management. In addition, CPAF contracts can be particularly costly and burdensome for NASA to administer because of contract reporting and review requirements. Major cost drivers include the number of award-fee periods, performance monitors, and performance evaluation board members necessary for implementing the award-fee process. For example, according to a conservative estimate in NASA’s Award Fee Contracting Guide, it would cost about $387,000 to administer the award-fee process over the life of a 5-year contract. The guide notes that the estimate does not represent all associated administrative cost that might arise. Although NASA procurement officials acknowledged that formal cost-benefit analyses were not prepared, some officials referred to determination and findings statements or acquisition strategy meeting documents associated with specific contracts as providing some evidence of consideration given to whether or not CPAF contracts should be used. While NASA officials expressed satisfaction with the results of the contracts, in some cases there appeared to be a disconnect between the fee paid and program results. NASA paid most of the available fee on all of the contracts we reviewed—including on projects that showed cost increases, schedule delays, and technical problems. The total estimated value of the 10 contracts we reviewed was more than $31 billion. NASA paid between 80 and 99 percent of the maximum award fee possible on these contracts.
The average was 90 percent, which equated to almost a billion dollars in total award fees paid under the 10 contracts. Table 2 shows the percentage of award fee paid for each of the 10 contracts we reviewed. NASA officials expressed satisfaction with contract results, which was further evidenced by its evaluations of contractor performance against criteria established in the award-fee plan. While NASA’s evaluations would indicate generally good performance, such performance did not always translate into desired program outcomes. That disconnect raises questions as to the extent NASA is achieving the effectiveness it sought through the establishment of guidance on the use of award fees. On the end item contracts we reviewed, although there were some periods in which NASA paid a lesser percentage of the available fee, NASA ultimately paid more than 90 percent of the available fee based on its evaluation of contractor performance against criteria in the award-fee plan even when those contracts did not deliver capability within initial cost, schedule, and performance parameters. For example: The prime contractor for the International Space Station (ISS) has received 92 percent of the total award fee available—$425.3 million—although the cost increased by 131 percent, from $5.6 billion to $13 billion, in part due to increased contract scope and delays caused by the Columbia accident, but also contractor cost overruns. In addition, the contractor estimates that it will incur an additional $76 million in overruns by the time the contract is completed. Further, the completion date for space station assembly under the prime contract was delayed by 8 years. In some cases these delays were caused by actions not within the control of the contractor, such as problems with the shuttle program and actions by the international partners.
The contractor for the Earth Observing System Data and Information System (EOSDIS) Core System (ECS) was paid 97 percent of the available award fee—$103.2 million—despite a delay in the completion of the contract by more than 2 years and an increase in the cost of the contract from $766 million to $1.2 billion. Technical problems, schedule delays, and cost control problems led to a major restructuring of the contract. The Landsat-7 contractor was paid 99 percent of the available award fee, or more than $17 million. The original contract was managed by the Air Force but was subsequently transferred to NASA and rebaselined. The cost of the contract when transferred to NASA and rebaselined was $342.7 million. The Landsat-7 launch was delayed by 9 months, and although the original scope of the work under the contract was significantly reduced, the cost of the contract increased. By the time the contract was complete, costs had risen 20 percent to $409.6 million. While some NASA officials pointed out that problems encountered on these contracts were at times outside the control of the contractor, difficulties such as these with achieving program results have resulted in NASA contract management being considered a high-risk area by GAO. We did not review these contracts to determine responsibility for undesirable results and therefore make no conclusion as to whether the fee paid was appropriate on each particular contract. However, the high fees paid on contracts where programs experienced disappointing results raise questions as to the effectiveness of award fees as a tool for obtaining desired program outcomes. For the service contracts we reviewed, NASA officials reported that they were satisfied with the results and quality of services provided. While we could not assess these contracts against cost, schedule, and performance outcomes as we could with the end item contracts, we did assess the award-fee criteria used in these contracts against NASA guidance.
Here we found instances of process and input-oriented subfactors and the inclusion of numerous subfactors in evaluating performance. Further, we found no evidence that a cost-benefit analysis had been performed prior to choosing the contract type. Taken together, this is not the preferred approach according to NASA guidance, which raises questions as to the degree to which performance outcomes—getting the quality of service desired—were actually the basis for judging contractor performance and awarding fee. NASA views CPAF contracts as a viable and often preferred mechanism for acquiring the types of goods and services that the agency procures. NASA’s satisfaction with the results of these contracts is evidenced by the level of fee paid on all of the contracts we reviewed and is based on NASA’s evaluation of compliance with criteria contained in its award-fee plans. However, the agency has not evaluated the overall effectiveness of award fees in promoting desired outcomes. As noted, NASA developed its new policies on award-fee contracts because the agency and its Office of Inspector General found that it was paying excessive fees with limited emphasis on acquisition outcomes. However, according to NASA officials, the agency has not completed any assessments of the effectiveness of award fees since the award-fee policy was restructured in the 1990s, nor has it developed metrics or performance measures to conduct such evaluations. Further, NASA lacks an agencywide system with the capability of compiling and aggregating award-fee information and identifying trends and outcomes. According to NASA officials, even NASA’s modern Integrated Enterprise Management Program (IEMP) will not provide this capability. Thus, NASA cannot meaningfully judge how well award fees are improving or can improve contractor performance and program outcomes.
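The cost-growth figures cited for the end item contracts above follow from simple percentage arithmetic on the baseline and final contract values. The sketch below reproduces that arithmetic; the dollar figures are those quoted in this report, and the rounded ISS amounts ($5.6 billion to $13 billion) yield about 132 percent, in line with the report's 131 percent, which is presumably computed from unrounded amounts.

```python
def cost_growth_pct(baseline: float, final: float) -> float:
    """Percentage growth from a contract's baseline cost to its final cost."""
    return (final - baseline) / baseline * 100.0

# Figures quoted above, in millions of dollars.
iss_growth = cost_growth_pct(5_600, 13_000)     # ISS prime: about 132 percent
ecs_growth = cost_growth_pct(766, 1_200)        # ECS: about 57 percent
landsat_growth = cost_growth_pct(342.7, 409.6)  # Landsat-7: about 20 percent
```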
NASA could better link its award fees to desired results by making greater use of outcome factors, its preferred criteria for evaluating award fee contracts. While NASA has established policies and guidance that provide an appropriate framework for their use, the agency has not always used award fees as preferred by its guidance. To the extent that NASA uses input evaluation factors and numerous subfactors for evaluating performance, NASA may be diluting the leverage of award fees in achieving desired results. Our review raises questions as to the extent NASA is achieving the effectiveness it sought through the establishment of guidance on the use of award fees. However, NASA has not evaluated the overall effectiveness of its implementation of award fees. We are making the following three recommendations to increase the likelihood that the award fees NASA pays incentivize high performance from its suppliers. We recommend that the NASA Administrator reemphasize to the NASA centers the importance of tying award-fee criteria to desired outcomes and limiting the number of subfactors used in evaluations. To ensure that cost-plus-award-fee contracts are used only when their benefits outweigh the costs, we recommend that the NASA Administrator direct the centers to consider costs and benefits in choosing this contract type by requiring documentation explaining how the perceived benefits will offset the additional cost associated with its administration as required by the FAR. Finally, we recommend that the NASA Administrator require the development of metrics for measuring the effectiveness of award fees, establish a system for collecting data on the use of award-fee contracts, and regularly examine the effectiveness of award fees in achieving desired acquisition outcomes. 
In commenting on a draft of this report, NASA concurred with our recommendations and indicated that it would reemphasize its current guidance as recommended, address the issues raised by the report in training, and cover those issues in its internal reviews of procurement operations at the individual Space Centers. In terms of our recommendation to develop metrics for measuring the effectiveness of award fees and establish a system for collecting data on the use of award-fee contracts, NASA concurred and indicated it would explore the best way to develop and use metrics for evaluating the effectiveness of award fees and set up a system for collecting data on award-fee contracts. NASA said it planned to contact the Department of Defense to obtain information on its process, since DOD is also developing such a data collection system and metrics for measuring the effectiveness of award fees. NASA also provided technical comments on the draft, which have been incorporated as appropriate. As agreed with your office, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to interested congressional committees and the NASA Administrator. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix IV. Our objectives were to determine (1) the extent the National Aeronautics and Space Administration’s (NASA) guidance addresses the problems previously identified with the use of award-fee contracts and (2) whether NASA follows its guidance in using award fees to achieve desired outcomes.
We selected 10 NASA cost-plus-award-fee (CPAF) contracts to review. Our selection was based on contract data from the Federal Procurement Data System. We extracted information on all NASA contracts active between fiscal years 2002 and 2004 that were coded as CPAF. To ensure the validity of the database from which we drew our contracts, we confirmed the contract type of each of the 10 contracts we selected through NASA contracting officers and contract documentation. The contracts we selected were the top 10 dollar value contracts active from fiscal years 2002 through 2004. These contracts account for about $7.6 billion, or 44 percent, of obligated cost-plus-award-fee dollars for the 3-year period. To determine the extent NASA’s guidance addresses the problems previously identified with the use of award-fee contracts and whether NASA follows its guidance in using award fees to achieve desired outcomes, we interviewed responsible program and procurement officials at NASA headquarters and six NASA centers. We also reviewed the Federal Acquisition Regulation (FAR), the NASA FAR Supplement, and NASA’s Award Fee Contracting Guide. We conducted a literature review and examined previous reports, studies, and analyses done by GAO, NASA, the NASA Inspector General, or others that included information related to NASA’s use of award fees and other relevant issues. Additionally, we reviewed contract files, obtained information from program and contracting officials through the use of a structured questionnaire, and discussed the application of award-fee criteria with NASA officials involved in the award-fee process. The contract documents we reviewed contained information related to the development and implementation of the award fee.
This information included the basic contract and statement of work, acquisition planning documents, award-fee modifications, performance evaluation plan documentation describing fee criteria for specific evaluation periods, contractor self-assessments, performance evaluation board reports, and fee determination documents. We used this information to corroborate and supplement the information provided by NASA officials in response to structured questionnaires we prepared and interviews we conducted. We e-mailed the questionnaires and received written responses for all 10 of the contracts. We conducted structured interviews with contracting and program officials concerning the development, implementation, and effectiveness of the award-fee structure for some of the contracts. To accomplish our work, we visited NASA headquarters in Washington, D.C. We also visited and held teleconferences with Goddard Space Flight Center in Greenbelt, Maryland, responsible for managing 3 of the contracts we reviewed; Johnson Space Center in Houston, Texas, responsible for managing 3 of the contracts; and Marshall Space Flight Center in Huntsville, Alabama, responsible for managing 1 of the contracts. We held teleconferences with officials at the Jet Propulsion Laboratory in Pasadena, California; Kennedy Space Center in Cape Canaveral, Florida; and Ames Research Center in Moffett Field, California, responsible for managing 1 contract each under our review. Our work was conducted from August 2005 through October 2006 in accordance with generally accepted government auditing standards. NAS5-60000 was an end item hardware cost-plus-award-fee contract between NASA and Hughes Applied Information Systems Incorporated. Raytheon Information Systems Company acquired Hughes in December 1999 and became the prime contractor. The contract, currently closed, was managed by Goddard Space Flight Center.
The 10-year research and development contract was awarded in March 1993 for the development and operation of the Earth Observing System Data and Information System Core System. The period of performance on the contract actually ended in April 2005, and the contract has since been closed. According to Goddard Space Flight Center procurement officials, the desired program outcome or objective of the contract was to develop a technically capable system to process data from NASA’s satellites at a reasonable cost. Procurement officials stated that the Earth Observing System Data and Information System Core System, a state-of-the-art data-processing system, is currently dedicated to the processing and dissemination of NASA Earth Science satellite data. NAS15-10000 is an end item hardware cost-plus-award-fee contract between NASA and the Boeing Company. The contract, currently active, is managed by the Johnson Space Center. A letter contract was awarded in November 1993 and was definitized in January 1995 as a cost-plus-incentive-fee award-fee contract. In October 1999, during a restructuring of the contract, the cost-plus-incentive-fee award-fee contract was converted to a cost-plus-award-fee contract. The contract was extended in December 2003, partially because of the Columbia accident. This planned 10-year contract is for the design, development, manufacture, and on-orbit assembly of the U.S. on-orbit segment of the International Space Station. The contract also included provisions for a level of effort that included (1) sustaining engineering, (2) multi-element integrated testing, (3) logistics and maintenance–post production support, (4) technical definition of contract changes, and (5) other engineering support. According to Johnson Space Center procurement officials, the desired program outcomes or objectives of this contract are (1) completion of the U.S. on-orbit segment, delivery, and on-orbit acceptance of the space station; (2) sustaining engineering of the U.S. on-orbit segment hardware and software and common hardware and software provided to international partners/participants and payloads; (3) post-production support of the U.S. on-orbit segment hardware and common hardware provided to the international partners/participants; and (4) space station end-to-end subsystems management for the majority of the subsystems and specialty engineering disciplines. NAS5-32633 was an end item hardware cost-plus-award-fee contract between NASA and Lockheed Martin Missiles and Space. The contract, currently closed, was managed by Goddard Space Flight Center. The research and development contract was initially awarded by the Air Force in October 1992 and transferred to NASA in May 1994. The contract was for the design, development, fabrication, integration, test, and pre- and post-launch support of the Landsat-7 spacecraft. Landsat-7 was launched in April 1999; the contract was completed in 2005. The purpose of the Landsat-7 satellite is to obtain continuous remotely sensed, high-resolution imagery of the earth’s surface for environmental monitoring, disaster assessment, land use and regional planning, cartography, range management, and oil and mineral exploration. According to Goddard Space Flight Center procurement officials, the desired program outcome or objective of the contract was to develop an operational satellite that met the science requirements of users and the laws requiring the data be obtained at a reasonable cost. NAS8-60000 was a cost-plus-award-fee service contract between NASA and Computer Sciences Corporation. The contract, managed by the Marshall Space Flight Center, was in the process of being closed as of June 2006. It was awarded in May 1994, and covered a 2-year period of performance, but included options to extend the period of performance for an additional 6 years—through April 30, 2002.
The contract was extended three times, with the period of performance ending on March 30, 2004. The primary purpose of the contract was to provide services in the area of program information system mission services. The contractor’s responsibilities were to manage, be responsible for, and provide information services to meet requirements of the Information Systems Services Office and its customers. According to Marshall Space Flight Center procurement officials, the desired program outcome or objective of the contract was to provide services including operating and maintaining existing equipment and software; gathering, analyzing, defining, and documenting systems requirements; and planning, designing, developing, acquiring, integrating, testing, and implementing new systems or enhancements to existing systems. NAS2-14263 was a cost-plus-award-fee service contract between NASA and Lockheed Martin Engineering and Science Company, defined under task orders. The contract, managed by Ames Research Center, was in the process of being closed as of June 2006. Its period of performance ended in September 2003. The 5-year research and development contract was awarded in May 1995 for the provision of engineering and technical support services for Ames Research Center life sciences. The work to be performed included engineering and technical support for life sciences projects, including space shuttle life sciences payloads, other life science payloads, the Space Station Biological Research Project, ground-based life sciences research, and advanced life support technology development. According to Ames Research Center procurement officials, the desired program outcome or objective of the contract was to achieve support for space life science projects, life sciences research, and related technology. 
NAS9-19100 was a cost-plus-award-fee service contract between NASA and Lockheed Martin with indefinite delivery, indefinite quantity task orders; performance-based; and level-of-effort provisions. Following the merger of Lockheed and Martin in 1995, NASA consolidated two existing contracts to form NAS9-19100 with an effective date of October 1, 1996. The contract, managed by Johnson Space Center, was in the process of being closed as of June 2006. The period of performance ended in January 2005. The contract included requirements related to hardware, government-furnished crew equipment, facilities, laboratory maintenance, life sciences, flight hardware, and support for the science and engineering requirements of the Space Shuttle Program and the International Space Station Program. According to Johnson Space Center procurement officials, the desired program outcomes or objectives of the contract were to provide engineering and science support to all engineering directorates at Johnson Space Center as well as support both the science and engineering requirements of the shuttle and space station programs. NAS9-98100 was a cost-plus-award-fee service contract between NASA and Lockheed Martin Space Operations Company, with task orders and level-of-effort provisions. The contract, which was in the process of being closed as of June 2006, was managed by the Johnson Space Center. It was awarded on September 25, 1998, with a basic 5-year period of performance and an option for an additional 5-year period. NASA chose not to exercise the option for the second 5-year period of performance.
The contract required (1) developing an integrated operations approach to spacecraft design, operations, and data processing that minimized cost and the support infrastructure required to conduct space operations; (2) obtaining a highly capable and accountable contractor that would be responsible for providing space operations mission and data services; and (3) providing a contract and management structure that would enable outsourcing, commercialization, or privatization of some or all service under the contract. According to Johnson Space Center procurement officials, the desired program outcomes or objectives of the contract were to (1) provide excellent quality and reliable mission and data services at a significantly reduced cost; (2) move end-to-end mission and service responsibility and accountability to industry; (3) implement an integrated architecture that reduces overlap, eliminates unnecessary duplication, and reduces life cycle costs; (4) define streamlined processes that minimize intermediaries required to define requirements and deliver services; and (5) adopt private sector commercial practices and services. NAS10-99001 is a cost-plus-award-fee service contract between NASA and Space Gateway Support. The contract, currently active, is managed by Kennedy Space Center. The contract was awarded on October 1, 1998, for a basic 5-year period of performance and included an option for an additional 5 years. NASA exercised that option on October 1, 2003. The purpose was to provide for base operations support at NASA’s Kennedy Space Center and the Air Force’s Cape Canaveral Air Force Station, as well as specific requirements at Patrick Air Force Base and Florida Annexes into one consolidated contract. In addition to NASA and the Air Force, other primary customers include the Navy, Department of Interior, Spaceport Florida, and commercial customers such as Boeing, Lockheed Martin, Orbital Science, and Astrotech. 
According to Kennedy Space Center procurement officials, the desired program outcomes or objectives of the contract are to (1) enhance safety for the public and on-site workforce; (2) provide protection of human, national, and environmental resources; (3) provide high-quality and responsive service to customers; (4) reduce the cost of doing business for NASA and the Air Force; (5) provide flexibility to respond to new requirements and unplanned events; (6) improve supportability and reliability through innovative methodologies and concepts; (7) provide common support practices and systems; and (8) increase small business subcontracting goals. NAS5-01090 is a cost-plus-award-fee service contract between NASA and Swales and Associates, with a line item for indefinite delivery, indefinite quantity task orders. The contract, currently active, is managed by Goddard Space Flight Center. NAS5-01090 was awarded in January 2001 with a period of performance of 5 years and 30 days. According to Goddard Space Flight Center procurement officials, the period of performance was extended and was currently scheduled to end on August 15, 2006. The purpose of the contract is to provide engineering services for the study, design, development, fabrication, integration, testing, verification, and operations of space flight and ground system hardware and software, including development and validation of new technologies to enable future science missions. According to Goddard Space Flight Center procurement officials, the desired program outcomes or objectives of the contract were to obtain high-quality performance, desired results, and output. NAS7-03001 is a cost-plus-award-fee contract between NASA and the California Institute of Technology, a private nonprofit educational institution, which establishes the relationship for the operation of the Jet Propulsion Laboratory (JPL) federally funded research and development center. 
The contract, currently active, is a 5-year research and development contract awarded in November 2002 for the operation and management of JPL. The contract allows for extension or decrease to the initial period of performance in 3- or 9-month increments. JPL is a NASA-owned facility as well as an operating division of Caltech. Caltech has operated JPL as a federally funded research and development center since 1959 to meet certain government research and development needs, which, according to the contract, could not be met as effectively by existing government resources or normal contractor relationships. The contract includes both service and product deliverables, which are defined in task orders issued under the contract. The contract encompasses a large number of discrete programs and projects—approximately 500 active task orders. According to NASA procurement officials, the desired program outcomes or objectives of the contract are specific performance requirements defined in task orders issued under the contract. The contract encompasses support of exploration of the solar system, including earth-based investigations, investigations and studies to support NASA missions in the fields of earth science and astrophysics and astrobiology, as well as development of supporting fundamental technologies. In addition to the individual named above, Thomas Denomme, Assistant Director; James Beard; Shirley Johnson; Julia Kennon; Heather Barker Miller; Kenneth Patton; Sylvia Schatz; and Robert Swierczek made key contributions to this report. | Cost-plus-award-fee contracts accounted for almost half of the National Aeronautics and Space Administration's (NASA) obligated contract dollars for fiscal years 2002-2004. Since 1990, we have identified NASA's contract management as a high-risk area--in part because of a lack of emphasis on end results.
Congress asked us to examine (1) the extent NASA's guidance on award fees addresses problems previously identified with the use of award-fee contracts and (2) whether NASA follows its guidance in using award fees to achieve desired outcomes. We reviewed the top 10 dollar-value award-fee contracts active from fiscal years 2002 through 2004. NASA guidance on the use of cost-plus-award-fee (CPAF) contracts provides criteria to improve the effectiveness of award fees. For example, the guidance emphasizes outcome factors that are good indicators of success in achieving desired results, cautions against using numerous evaluation factors, prohibits rollover of unearned fee, and encourages evaluating the costs and benefits of such contracts before using this contract type. However, NASA does not always follow the preferred approach laid out in its guidance. For example, some evaluation criteria contained input or process factors, such as program planning and organizational management. Moreover, some contracts included numerous supporting subfactors that may dilute emphasis on any specific criteria. Although the Federal Acquisition Regulation and NASA guidance require considering the costs and benefits of choosing a CPAF contract, NASA did not perform such analyses. In some cases there appears to be a significant disconnect between program results and fees paid. For example, NASA paid the contractor for the Earth Observing System Data and Information System Core System 97 percent of the available award fee despite a delay in the completion of the contract by over 2 years and an increase in the cost of the contract of more than 50 percent. NASA officials expressed satisfaction with the results of the contracts we reviewed, and this was further evidenced by the extent of fee paid. NASA's satisfaction was based on its evaluations of contractor performance against criteria established in the award-fee plan.
While NASA's evaluations would indicate generally good contractor performance, that performance did not always translate into desired program outcomes. That disconnect raises questions as to the extent NASA is achieving the effectiveness it sought through the establishment of guidance on the use of award fees. NASA has not evaluated the overall effectiveness of award fees and does not have metrics in place for conducting such evaluations. |
In October 1998, the EPA Administrator announced plans to create an office with responsibility for information management, policy, and technology. This announcement came after many previous efforts by EPA to improve information management and after a long history of concerns that we, the EPA Inspector General, and others have expressed about the agency’s information management activities. Such concerns involve the accuracy and completeness of EPA’s environmental data, the fragmentation of the data across many incompatible databases, and the need for improved measures of program outcomes and environmental quality. The EPA Administrator described the new office as being responsible for improving the quality of information used within EPA and provided to the public and for developing and implementing the goals, standards, and accountability systems needed to bring about these improvements. To this end, the information office would (1) ensure that the quality of data collected and used by EPA is known and appropriate for its intended uses, (2) reduce the burden of the states and regulated industries to collect and report data, (3) fill significant data gaps, and (4) provide the public with integrated information and statistics on issues related to the environment and public health. The office would also have the authority to implement standards and policies for information resources management and be responsible for purchasing and operating information technology and systems. Under a general framework for the new office that has been approved by the EPA Administrator, EPA officials have been working for the past several months to develop recommendations for organizing existing EPA personnel and resources into the central information office. Nonetheless, EPA has not yet developed an information plan that identifies the office’s goals, objectives, and outcomes. 
Although agency officials acknowledge the importance of developing such a plan, they have not established any milestones for doing so. While EPA has made progress in determining the organizational structure of the office, final decisions have not been made and EPA has not yet identified the employees and the resources that will be needed. Setting up the organizational structure prior to developing an information plan runs the risk that the organization will not contain the resources or structure needed to accomplish its goals. Although EPA has articulated both a vision and key goals for its new information office, it has not yet developed an information plan to show how the agency intends to achieve its vision and goals. Given the many important and complex issues on information management, policy, and technology that face the new office, it will be extremely important for EPA to establish a clear set of priorities and resources needed to accomplish them. Such information is also essential for EPA to develop realistic budgetary estimates for the office. EPA has indicated that it intends to develop an information plan for the agency that will provide a better mechanism to effectively and efficiently plan its information and technology investments on a multiyear basis. This plan will be coordinated with EPA’s agencywide strategic plan, prepared under the Government Performance and Results Act. EPA intends for the plan to reflect the results of its initiative to improve coordination among the agency’s major activities relating to information on environment and program outcomes. It has not yet, however, developed any milestones or target dates for initiating or completing either the plan or the coordination initiative. In early December 1998, the EPA Administrator approved a broad framework for the new information office and set a goal of completing the reorganization during the summer of 1999.
Under the framework approved by the EPA Administrator, the new office will have three organizational units responsible for (1) information policy and collection, (2) information technology and services, and (3) information analysis and access, respectively. In addition, three smaller units will provide support in areas such as data quality and strategic planning. A transition team of EPA staff has been tasked with developing recommendations for the new office’s mission and priorities as well as its detailed organizational and reporting structure. In developing these recommendations, the transition team has consulted with the states, regulated industries, and other stakeholders to exchange views regarding the vision, goals, priorities, and initial projects for the office. One of the transition team’s key responsibilities is to make recommendations concerning which EPA units should move into the information office and in which of the three major organizational units they should go. To date, the transition team has not finalized its recommendations on these issues or on how the new office will operate and the staff it will need. Even though EPA has not yet determined which staff will be moved to the central information office, the transition team’s director told us that it is expected that the office will have about 350 employees. She said that the staffing needs of the office will be met by moving existing employees in EPA units affected by the reorganization. The director said that, once the transition team recommends which EPA units will become part of the central office, the agency will determine which staff will be assigned to the office. She added that staffing decisions will be completed by July 1999 and the office will begin functioning sometime in August 1999. 
The funding needs of the new office were not specified in EPA’s fiscal year 2000 budget request to the Congress because the agency did not have sufficient information on them when the request was submitted in February 1999. The director of the transition team told us that in June 1999 the agency will identify the anticipated resources that will transfer to the new office from various parts of EPA. The agency plans to prepare the fiscal year 2000 operating plan for the office in October 1999, when EPA has a better idea of the resources needed to accomplish the responsibilities that the office will be tasked with during its first year of operation. The transition team’s director told us that decisions on budget allocations are particularly difficult to make at the present time due to the sensitive nature of notifying managers of EPA’s various components that they may lose funds and staff to the new office. Furthermore, EPA will soon need to prepare its budget for fiscal year 2001. According to EPA officials, the Office of the Chief Financial Officer will coordinate a planning strategy this spring that will lead to the fiscal year 2001 annual performance plan and proposed budget, which will be submitted to the Office of Management and Budget by September 1999. The idea of a centralized information office within EPA has been met with enthusiasm in many corners—not only by state regulators, but also by representatives of regulated industries, environmental advocacy groups, and others. Although the establishment of this office is seen as an important step in improving how EPA collects, manages, and disseminates information, the office will face many challenges, some of which have thwarted previous efforts by EPA to improve its information management activities. On the basis of our prior and ongoing work, we believe that the agency must address these challenges for the reorganization to significantly improve EPA’s information management activities. 
Among the most important of these challenges are (1) obtaining sufficient resources and expertise to address the complex information management issues facing the agency; (2) overcoming problems associated with EPA’s decentralized organizational structure, such as the lack of agencywide information dissemination policies; (3) balancing the demand for more data with calls from the states and regulated industries to reduce reporting burdens; and (4) working effectively with EPA’s counterparts in state government. The new organizational structure will offer EPA an opportunity to better coordinate and prioritize its information initiatives. The EPA Administrator and the senior-level officials charged with creating the new office have expressed their intentions to make fundamental improvements in how the agency uses information to carry out its mission to protect human health and the environment. They likewise recognize that the reorganization will raise a variety of complex information policy and technology issues. To address the significant challenges facing EPA, the new office will need significant resources and expertise. EPA anticipates that the new office will substantially improve the agency’s information management activities, rather than merely centralize existing efforts to address information management issues. Senior EPA officials responsible for creating the new office anticipate that the information office will need “purse strings control” over the agency’s resources for information management expenditures in order to implement its policies, data standards, procedures, and other decisions agencywide. For example, one official told us that the new office should be given veto authority over the development or modernization of data systems throughout EPA. 
To date, the focus of efforts to create the office has been on what the agency sees as the more pressing task of determining which organizational components and staff members should be transferred into the new office. While such decisions are clearly important, EPA also needs to determine whether its current information management resources, including staff expertise, are sufficient to enable the new office to achieve its goals. EPA will need to provide the new office with sufficient authority to overcome organizational obstacles to adopt agencywide information policies and procedures. As we reported last September, EPA has not yet developed policies and procedures to govern key aspects of its projects to disseminate information, nor has it developed standards to assess the data’s accuracy and mechanisms to determine and correct errors. Because EPA does not have agencywide policies regarding the dissemination of information, program offices have been making their own, sometimes conflicting decisions about the types of information to be released and the extent of explanations needed about how data should be interpreted. Likewise, although the agency has a quality assurance program, there is not yet a common understanding across the agency of what data quality means and how EPA and its state partners can most effectively ensure that the data used for decision-making and/or disseminated to the public is of high quality. To address such issues, EPA plans to create a Quality Board of senior managers within the new office in the summer of 1999. Although EPA acknowledges its need for agencywide policies governing information collection, management, and dissemination, it continues to operate in a decentralized fashion that heightens the difficulty of developing and implementing agencywide procedures. EPA’s offices have been given the responsibility and authority to develop and manage their own data systems for the nearly 30 years since the agency’s creation.
Given this history, overcoming the potential resistance to centralized policies may be a serious challenge to the new information office. EPA and its state partners in implementing environmental programs have collected a wealth of environmental data under various statutory and regulatory authorities. However, important gaps in the data exist. For example, EPA has limited data that are based on (1) the monitoring of environmental conditions and (2) the exposures of humans to toxic pollutants. Furthermore, the human health and ecological effects of many pollutants are not well understood. EPA also needs comprehensive information on environmental conditions and their changes over time to identify problem areas that are emerging or that need additional regulatory action or other attention. In contrast to the need for more and better data is a call from states and regulated industries to reduce data management and reporting burdens. EPA has recently initiated some efforts in this regard. For example, an EPA/state information management workgroup looking into this issue has proposed an approach to assess environmental information and data reporting requirements based on the value of the information compared to the cost of collecting, managing, and reporting it. EPA has announced that in the coming months, its regional offices and the states will be exploring possibilities for reducing paperwork requirements for EPA’s programs, testing specific initiatives in consultation with EPA’s program offices, and establishing a clearinghouse of successful initiatives and pilot projects. However, overall reductions in reporting burdens have proved difficult to achieve. For example, in March 1996, we reported that while EPA was pursuing a paperwork reduction of 20 million hours, its overall paperwork burden was actually increasing because of changes in programs and other factors. 
The states and regulated industries have indicated that they will look to EPA’s new office to reduce the burden of reporting requirements. Although both EPA and the states have recognized the value in fostering a strong partnership concerning information management, they also recognize that this will be a challenging task both in terms of policy and technical issues. For example, the states vary significantly in terms of the data they need to manage their environmental programs, and such differences have complicated the efforts of EPA and the states to develop common standards to facilitate data sharing. The task is even more challenging given that EPA’s various information systems do not use common data standards. For example, an individual facility is not identified by the same code in different systems. Given that EPA depends on state regulatory agencies to collect much of the data it needs and to help ensure the quality of that data, EPA recognizes the need to work in a close partnership with the states on a wide variety of information management activities, including the creation of its new information office. Some partnerships have already been created. For example, EPA and the states are reviewing reporting burdens to identify areas in which the burden can be reduced or eliminated. Under another EPA initiative, the agency is working with states to create data standards so that environmental information from various EPA and state databases can be more readily shared. Representatives of state environmental agencies and the Environmental Council of the States have expressed their ideas and concerns about the role of EPA’s new information office and have frequently reminded EPA that they expect to share with EPA the responsibility for setting that office’s goals, priorities, and strategies. 
According to a Council official, the states have had more input to the development of the new EPA office than they typically have had in other major policy issues and the states view this change as an improvement in their relationship with EPA. Collecting and managing the data that EPA requires to manage its programs have been major long-term challenges for the agency. The EPA Administrator’s recent decision to create a central information office to make fundamental agencywide improvements in data management activities is a step in the right direction. However, creating such an organization from disparate parts of the agency is a complex process and substantially improving and integrating EPA’s information systems will be difficult and likely require several years. To fully achieve EPA’s goals will require high priority within the agency, including the long-term appropriate resources and commitment of senior management.
| Pursuant to a congressional request, GAO discussed the Environmental Protection Agency's (EPA) information management initiatives, focusing on the: (1) status of EPA's efforts to create a central office responsible for information management, policy, and technology issues; and (2) major challenges that the new office needs to address in order to achieve success in collecting, using, and disseminating environmental information. GAO noted that: (1) EPA estimates that its central information office will be operational by the end of August 1999 and will have a staff of about 350 employees; (2) the office will address a broad range of information policy and technology issues, such as improving the accuracy of EPA's data, protecting the security of information that EPA disseminates over the Internet, developing better measures to assess environmental conditions, and reducing information collection and reporting burdens; (3) EPA recognizes the importance of developing an information plan showing the goals of the new office and the means by which they will be achieved but has not yet established milestones or target dates for completing such a plan; (4) although EPA has made progress in determining the organizational structure for the new office, it has not yet finalized decisions on the office's authorities, responsibilities, and budgetary needs; (5) the agency has not performed an analysis to determine the types and the skills of employees that will be needed to carry out the office's functions; (6) EPA officials told GAO that decisions on the office's authorities, responsibilities, budget, and staff will be made before the office is established in August 1999; (7) on the basis of GAO's prior and ongoing reviews of EPA's information management problems, GAO believes that the success of the new office depends on the agency's addressing several key challenges as it develops an information plan, budget, and organizational structure for that office; and (8) most 
importantly, EPA needs to: (a) provide the office with the resources and the expertise necessary to solve the complex information management, policy, and technology problems facing the agency; (b) empower the office to overcome organizational challenges to adopting agencywide information policies and procedures; (c) balance the agency's need for data on health, the environment, and program outcomes with the call from the states and regulated industries to reduce their reporting burdens; and (d) work closely with its state partners to design and implement improved information management systems. |
Three types of Internet pharmacies selling prescription drugs directly to consumers have emerged in recent years. First, some Internet pharmacies operate much like traditional drugstores or mail-order pharmacies: they dispense drugs only after receiving prescriptions from consumers or their physicians. Other Internet pharmacies provide customers medication without a physical examination by a physician. In place of the traditional face-to-face physician/patient consultation, the consumer fills out a medical questionnaire that is reportedly evaluated by a physician affiliated with the pharmacy. If the physician approves the questionnaire, he or she authorizes the online pharmacy to send the medication to the patient. This practice tends to be largely limited to “lifestyle” prescription drugs, such as those that alleviate allergies, promote hair growth, treat impotence, or control weight. Finally, some Internet pharmacies dispense medication without a prescription. Regardless of their methods, all Web sites selling prescription drugs are governed by the same complex network of laws and regulations at both the state and federal levels that govern traditional drugstores and mail-order drug services. In the United States, prescription drugs must be prescribed and dispensed by licensed health care professionals, who can help ensure proper dosing and administration and provide important information on the drug’s use to customers. To legally dispense a prescription drug, a pharmacist licensed with the state and working in a pharmacy licensed by the state must be presented a valid prescription from a licensed health care professional. Every state requires resident pharmacists and pharmacies to be licensed. The regulation of the practice of pharmacy is rooted in state pharmacy practice acts and regulations enforced by the state boards of pharmacy, which are responsible for licensing pharmacists and pharmacies. 
The state boards of pharmacy also are responsible for routinely inspecting pharmacies, ensuring that pharmacists and pharmacies comply with applicable state and federal laws, and investigating and disciplining those that fail to comply. In addition, 40 states require out-of-state pharmacies—called nonresident pharmacies—that dispense prescription drugs to state residents to be licensed or registered. Some state pharmacy boards regulate Internet pharmacies according to the same standards that apply to nonresident pharmacies. State pharmacy boards’ standards may require that nonresident pharmacies do the following: maintain separate records of prescription drugs dispensed to customers in the state so that these records are readily retrievable from the records of prescription drugs dispensed to other customers; provide a toll-free telephone number for communication between customers in the state and a pharmacist at the nonresident pharmacy and affix this telephone number to each prescription drug label; provide the location, names, and titles of all principal corporate officers; provide a list of all pharmacists who are dispensing prescription drugs to customers in the state; designate a pharmacist who is responsible for all prescription drugs dispensed to customers in the state; provide a copy of the most recent inspection report issued by the home state; and provide a copy of the most recent license issued by the home state. States also are responsible for regulating the practice of medicine. All states require that physicians practicing in the state be licensed to do so. State medical practice laws generally outline standards for the practice of medicine and delegate the responsibility of regulating physicians to state medical boards. State medical boards license physicians and grant them prescribing privileges. In addition, state medical boards investigate complaints and impose sanctions for violations of the state medical practice laws.
While states have jurisdiction within their borders, the sale of prescription drugs on the Internet can occur across state lines. The sale of prescription drugs between states or as a result of importation falls under the jurisdiction of the federal government. FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported pharmaceutical products under the FDCA. Specifically, FDA establishes standards for the safety, effectiveness, and manufacture of prescription drugs that must be met before they are approved for the U.S. market. FDA can take action against (1) the importation, sale, or distribution of an adulterated, misbranded, or unapproved drug; (2) the illegal promotion of a drug; (3) the sale or dispensing of a prescription drug without a valid prescription; and (4) the sale and dispensing of counterfeit drugs. If judicial intervention is required, Justice will become involved to enforce the FDCA. Justice also enforces other consumer protection statutes for which the primary regulatory authorities are administrative agencies such as FDA and FTC. FTC has responsibility for preventing deceptive or unfair acts or practices in commerce and has authority to bring an enforcement action when an Internet pharmacy makes false or misleading claims about its products or services. Finally, Justice’s DEA regulates controlled substances, including issuing all permits for the importation of pharmaceutical controlled substances and registering all legitimate importers and exporters, while Customs and the Postal Service enforce statutes and regulations governing the importation and domestic mailing of drugs. The very nature of the Internet makes identifying all pharmacies operating on it difficult. As a result, the precise number of Internet pharmacies selling prescription drugs directly to consumers is unknown.
We identified 190 Internet pharmacies selling prescription drugs directly to consumers, 79 of which dispense prescription drugs without a prescription or on the basis of a consumer’s having completed an online questionnaire (see table 1). Also, 185 of the identified Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs, and 37 did not provide an address or telephone number permitting the consumer to contact them if problems arose. Obtaining prescription drugs from unlicensed pharmacies without adequate physician supervision, including an examination, places consumers at risk of harmful side effects, possibly even death, from drugs that may be inappropriate for them. Estimates of the number of Internet pharmacies range from 200 to 400. However, it is difficult to determine the precise number of Internet pharmacies selling prescription drugs directly to consumers because Internet sites can be easily created and removed and some Internet pharmacies operate for a period of time at one Internet address and then close and reappear under another name. In addition, many Internet pharmacies have multiple portal sites (independent Web pages that connect to a single pharmacy). We found 95 sites that at first appeared to be discrete Internet pharmacies but were actually portal sites. As consumers click on the icons and links provided, they are brought to an Internet site that is completely different from the one they originally visited. Consumers may be unaware of these site changes unless they are paying close attention to the Internet site address bar on their browser. Some Internet pharmacies had as many as 18 portal sites. About 58 percent, or 111, of the Internet pharmacies we identified told consumers that they had to provide a prescription from their physician to purchase prescription drugs. 
Prescriptions may be submitted to an Internet pharmacy in various ways, including by mail or fax and through contact between the consumer’s physician or current pharmacy and the Internet pharmacy. The Internet pharmacy then verifies that a licensed physician actually has issued the prescription to the patient before it dispenses any drugs. Internet pharmacies that require a prescription from a physician generally operate similarly to traditional drugstore or mail-order pharmacies. In some instances, the Internet site is owned by or affiliated with a traditional drugstore. We identified 54 Internet pharmacies that issued prescriptions and dispensed medications on the basis of an online questionnaire. Generally, these short, easy-to-complete questionnaires asked about the consumer’s health profile, medical history, current medication use, and diagnosis. In some instances, pharmacies provided the answers necessary to obtain the prescription by placing checks next to the “correct” answers. Information on many of the Internet sites indicated that a physician reviews the questionnaire and then issues a prescription. The cost of the physician’s review ranged from $35 to $85, with most sites charging $75. Moreover, certain illegal and unethical prescribing and dispensing practices are occurring through some Internet pharmacies that focus solely on prescribing and dispensing certain “lifestyle” drugs, such as diet medications and drugs to treat impotence. We also identified 25 Internet pharmacies that dispensed prescription drugs without prescriptions. In the United States, it is illegal to sell or dispense a prescription drug without a prescription. Nevertheless, to obtain a drug from these Internet pharmacies, the consumer was asked only to complete an order form indicating the type and quantity of the drug desired and to provide credit card billing information.
Twenty-one of these 25 Internet pharmacies were located outside the United States; the location of the remaining 4 could not be determined. Generally, it is illegal to import prescription drugs that are not approved by FDA and manufactured in an FDA-approved facility. Obtaining prescription drugs from foreign-based Internet pharmacies places consumers at risk from counterfeit or unapproved drugs, or drugs that were manufactured and stored under poor conditions. The Internet pharmacies that we identified varied significantly in the information that they disclosed on their Web sites. For instance, 153 of the 190 Internet pharmacies we reviewed provided a mailing address or telephone number (see table 1). The lack of adequate identifying information prevents consumers from contacting Internet pharmacies if problems should arise. More importantly, most Internet pharmacies did not disclose the states where they were licensed to dispense prescription drugs. We contacted all U.S.-based Internet pharmacies to obtain this information. We then asked pharmacy boards in the 12 states with the largest numbers of licensed Internet pharmacies (70 in all) to verify their licensure status. Sixty-four pharmacies required a prescription to dispense drugs; of these, 22, or about 34 percent, were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed drugs. Internet pharmacies that issued prescriptions on the basis of online questionnaires disclosed even less information on their Web sites. Only 1 of the 54 Internet pharmacies disclosed the name of the physician responsible for reviewing questionnaires and issuing prescriptions. We attempted to contact 45 of these Internet pharmacies to obtain their licensure status; we did not attempt to contact 9 because they were located overseas. We were unable to reach 13 because they did not provide, and we could not obtain, a mailing address or telephone number.
In addition, 18 would not return repeated telephone calls, 3 were closed, and 2 refused to tell us where they were licensed. As a result, we were able to obtain licensure information for only nine Internet pharmacies affiliated with physicians that prescribe online. We found that six of the nine prescribing pharmacies were not licensed in one or more of the states in which they had told us they were licensed and in which they dispensed prescription drugs. The ability to buy prescription drugs from Internet pharmacies not licensed in the state where the customer is located and without appropriate physician supervision, including an examination, means that important safeguards related to the doctor/patient relationship and intrinsic to conventional prescribing are bypassed. We also found that only 44 Internet pharmacies (23 percent) posted a privacy statement on their Web sites. As recent studies have indicated, consumers are concerned about safeguarding their personal health information online and about potential transfers to third parties of the personal information they have given to online businesses. The majority of these pharmacies stated that the information provided by the patient would be kept confidential and would not be sold or traded to third parties. Our review of state privacy laws revealed that at least 21 states have laws protecting the privacy of pharmacy information. While the federal Health Insurance Portability and Accountability Act of 1996 called for nationwide protections for the privacy and security of electronic health information, including pharmacy data, regulations have not yet been finalized. State pharmacy and medical boards have policies created to regulate brick and mortar pharmacies and traditional doctor/patient relationships. However, the traditional regulatory and enforcement approaches used by these boards may not be adequate to protect consumers from the potentially dangerous practices of some Internet pharmacies.
Nevertheless, 20 states have taken disciplinary action against Internet pharmacies and physicians that have engaged in illegal or unethical practices. Many of these states have also introduced legislation to address illegal or unethical sales practices of Internet pharmacies and physicians prescribing on the Internet. Appendix II contains details on state actions to regulate pharmacies and physicians practicing on the Internet. The advent of Internet pharmacies poses new challenges for the traditional state regulatory agencies that oversee the practices of pharmacies. While 12 pharmacy boards reported that they have taken action against Internet pharmacies for illegally dispensing prescription drugs, many said they have encountered difficulties in identifying, investigating, and taking disciplinary action against illegally operating Internet pharmacies that are located outside state borders but shipping to the state. State pharmacy board actions consisted of referrals to federal agencies, state Attorneys General, or state medical boards. Almost half of the state pharmacy boards reported that they had experienced problems with or received complaints about Internet pharmacies. Specifically, 24 state pharmacy boards told us that they had experienced problems with Internet pharmacies not complying with their state pharmacy laws. The problems most commonly cited were distributing prescription drugs without a valid license or prescription, or without establishing a valid physician/patient relationship. Moreover, 20 state boards (40 percent) reported they had received at least 78 complaints, ranging from 1 to 15 per state, on Internet pharmacy practices. Many of these complaints were about Internet pharmacies that were dispensing medications without a valid prescription or had dispensed the wrong medication. State pharmacy boards also reported that they have encountered difficulties in identifying Internet pharmacies that are located outside their borders.
About 74 percent of state pharmacy boards reported having serious problems determining the physical location of an Internet pharmacy affiliated with an Internet Web site. Sixteen percent of state pharmacy boards reported some difficulty, and 10 percent reported no difficulty. Without this information, it is difficult to identify the companies and people responsible for selling prescription drugs. More importantly, state pharmacy boards have limited ability and authority to investigate and act against Internet pharmacies located outside their state but doing business in their state without a valid license. In our survey, many state pharmacy boards cited limited resources, and jurisdictional and technological limitations, as obstacles to enforcing their laws with regard to pharmacies not located in their states. Because of jurisdictional limits, states have found that their traditional investigative tools—interviews, physical or electronic surveillance, and serving subpoenas to produce documents and testimony—are not necessarily adequate to compel disclosure of information from a pharmacy or pharmacist located out of state. Similarly, the traditional enforcement mechanisms available to state pharmacy boards—disciplinary actions or sanctions against licensees—are not necessarily adequate to control a pharmacy or pharmacist located out of state. In the absence of the ability to investigate and take disciplinary action against a nonresident pharmacy, state pharmacy boards have been limited to referring unlicensed or unregistered Internet pharmacies to their counterpart boards in the states where the pharmacies are licensed. State medical boards have concerns about the growing number of Internet pharmacies that issue prescriptions on the basis of a simple online questionnaire rather than a face-to-face examination.
The AMA is also concerned that prescriptions are being provided to patients without the benefit of a physical examination, which would allow evaluation of any potential underlying cause of a patient’s dysfunction or disease, as well as an assessment of the most appropriate treatment. Moreover, medical boards are receiving complaints about physicians prescribing on the Internet. Twenty of the 45 medical boards responding to our survey reported that they had received complaints about physicians prescribing on the Internet during the last year. The most frequent complaint was that the physician did not perform an examination of the patient. As a result, medical boards in eight states have taken action against physicians for Internet prescribing violations. Disciplinary actions and sanctions have ranged from monetary fines and letters of reprimand to probation and license suspension. Thirty-nine of the 45 medical boards responding to our survey concluded that a physician who issued a prescription on the basis of a review of an online questionnaire did not satisfy the standard of good medical practice required under their states’ laws. Moreover, ten states have introduced or enacted legislation regarding the sale of prescription drugs on the Internet, including five states that have introduced legislation to prohibit physicians and other practitioners from prescribing prescription drugs on the Internet without conducting an examination or having a prior physician/patient relationship. Twelve states have adopted rules or statements that clarify their positions on the use of online questionnaires for issuing prescriptions. Generally, these statements either prohibit online prescribing or state that prescribing solely on the basis of answers to a questionnaire is inappropriate and unprofessional (see app. II).
As in the case of state pharmacy boards, state medical boards have limited ability and authority to investigate and act against physicians located outside of their state but prescribing on the Internet to state residents. Further, they too have had difficulty identifying these physicians. About 55 percent of state medical boards that responded to our survey told us they had difficulty determining both the identity and location of physicians prescribing drugs on the Internet, and 36 percent had difficulty determining whether the physician was licensed in another state. Since February 1999, six state Attorneys General have brought legal action against Internet pharmacies and physicians for providing prescription drugs to consumers in their states without a state license and for issuing prescriptions solely on the basis of information provided in online questionnaires. Most of the Internet pharmacies that were sued voluntarily stopped shipping prescription drugs to consumers in those states. As a result, at least 18 Internet pharmacies have stopped selling prescription drugs to residents in Illinois, Kansas, Michigan, Missouri, New Jersey, and Pennsylvania. Approximately 15 additional states are investigating Internet pharmacies for possible legal action. Investigating and prosecuting online offenders raise new challenges for law enforcement. For instance, Attorneys General also have complained that the lack of identifying information on pharmacy Web sites makes it difficult to identify the companies and people responsible for selling prescription drugs. Moreover, even if a state successfully sues an Internet pharmacy for engaging in illegal or unethical practices, such as prescribing on the basis of an online questionnaire or failing to adequately disclose identifying information, the Internet pharmacy is not prohibited from operating in other states. To stop such practices, each affected state must individually bring action against the Internet pharmacy.
As a result, to prevent one Internet pharmacy from doing business nationwide, the Attorney General in every state would have to file a lawsuit in his or her respective state court. Five federal agencies have authority to regulate and enforce U.S. laws that could be applied to the sale of prescription drugs on the Internet. Since Internet pharmacies first began operation in early 1999, FDA, Justice, DEA, Customs, and FTC have increased their efforts to respond to public health concerns about the illegal sale of prescription drugs on the Internet. FDA has taken enforcement actions against Internet pharmacies selling prescription drugs, Justice has prosecuted Internet pharmacies and physicians for dispensing medications without a valid prescription, DEA has investigated Internet pharmacies for illegal distribution of controlled substances, Customs has increased its seizure of packages that contain drugs entering the country, and FTC has negotiated settlements with Internet pharmacies for making deceptive health claims. While these agencies’ contributions are important, their efforts sometimes do not support each other. For instance, to conserve its resources FDA routinely releases packages of prescription drugs that Customs has detained because they may have been obtained illegally from foreign Internet pharmacies. Such uncoordinated program efforts can waste scarce resources, confuse and frustrate enforcement program administrators and customers, and limit the overall effectiveness of federal enforcement efforts. FDA has recently increased its monitoring and investigation of Internet pharmacies to determine if they are involved in illegal sales of prescription drugs. FDA has primary responsibility for regulating the sale, importation, and distribution of prescription drugs, including those sold on the Internet. In July 1999, FDA testified before the Congress that it did not generally regulate the practice of pharmacy or the practice of medicine.
Accordingly, FDA activities regarding the sale of drugs over the Internet had until then focused on unapproved drugs. As of April 2000, however, FDA had 54 ongoing investigations of Internet pharmacies that may be illegally selling prescription drugs. FDA has also referred to Justice for possible criminal prosecution approximately 33 cases involving over 100 Internet pharmacies that may be illegally selling prescription drugs. FDA’s criminal investigations of online pharmacies have, to date, resulted in the indictment and/or arrest of eight individuals, two of whom have been convicted. In addition, FDA is seeking $10 million in fiscal year 2001 to fund 77 staff positions that would be dedicated to investigating and taking enforcement actions against Internet pharmacies. Justice has increased its prosecution of Internet pharmacies illegally selling prescription drugs. Under the FDCA, a prescription drug is considered misbranded if it is not dispensed pursuant to a valid prescription under the professional supervision of a licensed practitioner. In July 1999, Justice testified before the Congress that it was examining its legal basis for prosecuting noncompliant Internet pharmacies and violative online prescribing practices. Since that time, according to FDA officials, 22 of the 33 criminal investigations FDA referred to Justice have been actively pursued. Two of the 33 cases were declined by Justice and are being prosecuted as criminal cases by local district attorneys, and 9 were referred to the state of Florida. In addition, Justice filed two cases involving the illegal sale of prescription drugs over the Internet in 1999 and is investigating approximately 20 more cases. Since May 2000, Justice has brought charges against, or obtained convictions of, individuals in three cases involving the sale of prescription drugs by Internet pharmacies without a prescription or the distribution of misbranded drugs. 
While DEA has no efforts formally dedicated to Internet issues, it has initiated 20 investigations of the use of the Internet for the illegal sale of controlled substances during the last 15 months. DEA has been particularly concerned about Internet pharmacies that are affiliated with physicians who prescribe controlled substances without examining patients. For instance, in July 1999 a DEA investigation led to the indictment of a Maryland doctor on 34 counts of providing controlled substances to patients worldwide in response to requests made over the Internet. Because Maryland requires that doctors examine patients before prescribing medications, the doctor’s prescriptions were not considered to be legitimately provided. The physician’s conduct on the Internet also violated an essential requirement of federal law, which is that controlled substances must be dispensed only with a valid prescription. The U.S. Customs Service, which is responsible for inspecting packages shipped to the United States from foreign countries, has increased its seizures of prescription drugs from overseas. Customs officials report that the number of drug shipments seized increased about 450 percent between 1998 and 1999—from 2,139 to 9,725. Most of these seizures involved controlled substances. Because of the large volume, Customs is able to examine only a fraction of the packages entering the United States daily and cannot determine how many of its drug seizures involve prescription drugs purchased from Internet pharmacies. Nevertheless, Customs officials believe that the Internet is playing a role in the increase in illegal drug importation. According to Customs officials, fiscal year 2000 seizures are on pace to equal or surpass 1999 levels. FTC reports that it is monitoring Internet pharmacies for compliance with the Federal Trade Commission Act, conducting investigations, and making referrals to state and federal authorities. 
FTC is responsible for combating unfair or deceptive trade practices, including those on the Internet, such as misrepresentation of online pharmacy privacy practices. In 1999, FTC referred two Internet pharmacies to state regulatory boards. This year, FTC charged individuals and Internet pharmacies with making false promotional claims and other violations. Recently, the operators of these Internet pharmacies agreed to settle out of court. According to the settlement agreement, the defendants are barred from misrepresenting medical and pharmaceutical arrangements and any material fact about the scope and nature of the defendants’ goods, services, or facilities. The sale of prescription drugs to U.S. residents by foreign Internet pharmacies poses the most difficult challenge for U.S. law enforcement authorities because the seller is not located within U.S. boundaries. Many prescription drugs available from foreign Internet pharmacies are either products for which there is no U.S.-approved counterpart or foreign versions of FDA-approved drugs. In either case, these drugs are not approved for use in the United States, and therefore it is illegal for a foreign Internet pharmacy to ship these products to the United States. In addition, federal law prohibits the sale of prescription drugs to U.S. citizens without a valid prescription. Although FDA officials said that the agency has jurisdiction over a resident in a foreign country who sells to a U.S. resident in violation of the FDCA, from a practical standpoint, FDA is hard-pressed to enforce U.S. laws against foreign sellers. As a result, FDA enforcement efforts against foreign Internet pharmacies have been limited mostly to requesting the foreign government to take action against the seller of the product. FDA has also posted information on its Web site to help educate consumers about safely purchasing drugs from Internet pharmacies.
FDA officials have sent 23 letters to operators of foreign Internet pharmacies warning them that they may be engaged in illegal activities, such as offering to sell prescription drugs to U.S. citizens without a valid, or in some cases without any, prescription. Copies of each letter were sent to regulatory officials in the country in which the pharmacy was based. In response, two Internet pharmacies said they will cease their sales to U.S. residents, and a third said it has ceased its sales regarding one drug but is still evaluating how it will handle other products. FDA has since requested that Customs detain packages from these Internet pharmacies. Customs has been successful in working with one foreign government to shut down its Internet pharmacies that were illegally selling prescription drugs to U.S. consumers. In January 2000, Customs assisted Thailand authorities in the execution of search and arrest warrants against seven Internet pharmacies, resulting in the arrest of 22 Thai citizens for violating Thailand’s drug and export laws and 6 people in the United States accused of buying drugs from the Thailand Internet pharmacy. U.S. and Thailand officials seized more than 2.5 million doses of prescription drugs and 245 parcels ready for shipment to the United States. According to FDA, it is illegal for a foreign-based Internet pharmacy to sell prescription drugs to consumers in the United States if those drugs are unapproved or are not dispensed pursuant to a valid prescription. But FDA permits patients and their physicians to obtain small quantities of drugs sold abroad, but not approved in the United States, for the treatment of a serious condition for which effective treatment may not be available domestically. FDA’s approach has been applied to products that do not represent an unreasonable risk and for which there is no known commercialization or promotion to U.S. residents. 
Further, a patient seeking to import such a product must provide to FDA the name of the licensed physician in the United States responsible for his or her treatment with the unapproved drug or provide evidence that the product is for continuation of a treatment begun in a foreign country. FDA has acknowledged that its guidance concerning importing prescription drugs through the mail has been inconsistently applied. At many Customs mail centers, FDA personnel rely on Customs officials to detain suspicious drug imports for FDA screening. Although prescription drugs ordered from foreign Internet pharmacies may not meet FDA’s criteria for importation under the personal use exemption, FDA personnel routinely release illegally imported prescription drugs detained by Customs officials. FDA has determined that the use of agency resources to provide comprehensive coverage of illegally imported drugs for personal use is generally not justified. Instead, the agency’s enforcement priorities are focused on drugs intended for the commercial market and on fraudulent products and those that pose an unreasonable health risk. FDA’s inconsistent application of its personal use exemption frustrates Customs officials and does little to deter foreign Internet pharmacies trafficking in prescription drugs. Accordingly, FDA plans to take the necessary actions to eliminate, or at least mitigate to the extent possible, the inconsistent interpretation and application of its guidance and work more closely with Customs. FDA’s approach to regulation of imported prescription drugs could be affected by enactment of pending legislation intended to allow American consumers to import drugs from certain other countries. Specifically, the appropriations bill for FDA (H.R. 4461) includes provisions that could modify the circumstances under which the agency may notify individuals seeking to import drugs into the United States that they may be in violation of federal law. 
According to an FDA official, it is not currently clear how these provisions, if enacted, could affect FDA’s ability to prevent the importation of violative drugs. Initiatives at the state and federal levels offer several approaches for regulating Internet pharmacies. The organization representing state boards of pharmacy, NABP, has developed a voluntary program for certifying Internet pharmacies. In addition, state and federal officials believe that they need more authority, as well as information regarding the identity of Internet pharmacies, to protect the public’s health. The organization representing state Attorneys General, NAAG, has asked the federal government to expand the authority of its members to allow them to take action in federal court. In addition, the administration has announced a new initiative that would grant FDA broad new authority to better identify, investigate, and prosecute Internet pharmacies for the illegal sale of prescription drugs. Concerned that consumers have no assurance of the legitimacy of Internet pharmacies, NABP is attempting to provide consumers with an instant mechanism for verifying the licensure status of Internet pharmacies. NABP’s Verified Internet Pharmacy Practice Sites (VIPPS) is a voluntary program that certifies online pharmacies that comply with criteria that attempt to combine state licensing requirements with standards developed by NABP for pharmacies practicing on the Internet. 
To obtain VIPPS certification, an Internet pharmacy must comply with the licensing and inspection requirements of the state where it is physically located and of each state to which it dispenses pharmaceuticals; demonstrate compliance with 17 standards by, for example, ensuring patient rights to privacy, authenticating and maintaining the security of prescription orders, adhering to recognized quality assurance policy, and providing meaningful consultation between customers and pharmacists; undergo an on-site inspection; develop a postcertification quality assurance program; and submit to continuing random inspections throughout a 3-year certification period. VIPPS-certified pharmacies are identified by the VIPPS hyperlink seal displayed on both their and NABP’s Web sites. Since VIPPS began in the fall of 1999, its seals have been presented to 11 Internet pharmacies, and 25 Internet pharmacies have submitted applications to display the seal. NAAG strongly supports the VIPPS program but maintains that the most important tool the federal government can give the states is nationwide injunctive relief. Modeled on the federal telemarketing statute, nationwide injunctive relief is an approach that would allow state Attorneys General to take action in federal court; if they were successful, an Internet pharmacy would be prevented from illegally selling prescription drugs nationwide. Two federal proposals would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include certain identifying language on its Web site. The Internet Pharmacy Consumer Protection Act (H.R.
2763) would amend the FDCA to require an Internet pharmacy engaged in interstate commerce to include a page on its Web site providing the following information: the name, address, and telephone number of the pharmacy’s principal place of business; each state in which the pharmacy is authorized by law to dispense prescription drugs; the name of each pharmacist and the state(s) in which the individual is licensed; and, if the site offers to provide prescriptions after medical consultation, the name of each prescriber, the state(s) in which the prescriber is licensed, and the health professions in which the individual holds such licenses. Also, under this act a state would have primary enforcement responsibility for any violation involving the purchase of a prescription drug made within the state, provided the state had requirements at least as stringent as those specified in the act and adequate procedures for enforcing those requirements. In addition, the administration has developed a bill aimed at providing consumers the protections they enjoy when they go to a drugstore to have their prescriptions filled. For example, when consumers walk into a drugstore to have a prescription filled, they know the identity and location of the pharmacy, and the license on the wall provides visual assurance that the pharmacy meets certain health and safety requirements in that state. Under the Internet Prescription Drug Sales Act of 2000, Internet pharmacies would be required to be licensed in each state where they do business; comply with all applicable state and federal requirements, including the requirement to dispense drugs only pursuant to a valid prescription; and disclose identifying information to consumers. Internet pharmacies also would be required to notify FDA and all applicable state boards of pharmacy prior to launching a new Web site. Internet pharmacies that met all of the requirements would be able to post on their Web site a declaration that they had made the required notifications.
FDA would designate one or more private nonprofit organizations or state agencies to verify licensing information included in notifications and to examine and inspect the records and facilities of Internet pharmacies. Internet pharmacies that do not meet notification and disclosure requirements or that sell prescription drugs without a valid prescription could face penalties as high as $500,000 for each violation. While Justice supports the Internet Prescription Drug Sales Act of 2000, department officials have recommended that it be modified. Prescription drug sales from Internet pharmacies often rely on credit card transactions processed by U.S. banks and credit card networks. To enhance its ability to investigate and stop payment for prescription drugs purchased illegally, Justice has recommended that federal law be amended to permit the Attorney General to seek injunctions against certain financial transactions traceable to unlawful online drug sales. According to Justice officials, if the Department and financial institutions can stop even some of the credit card orders for the illicit sale of prescription drugs and controlled substances, the operations of some “rogue” Internet pharmacies may be disrupted significantly. The unique qualities of the Internet pose new challenges for enforcing state pharmacy and medical practice laws because they allow pharmacies and physicians to reach consumers across state and international borders and remain anonymous. Internet pharmacies that fail to obtain licensure in the states where they operate may violate state law. But the Internet pharmacies that are affiliated with physicians that prescribe on the basis of an online questionnaire and those that dispense drugs without a prescription pose the most potential harm to consumers.
Dispensing prescription drugs without adequate physician supervision increases the risk of consumers’ suffering adverse events, including side effects from inappropriately prescribed medications and misbranded or contaminated drugs. Some states have taken action to stop Internet pharmacies that offer online prescribing services from selling prescription drugs to residents of their state. But the real difficulty lies in identifying responsible parties and enforcing laws across state boundaries. Enforcement actions by federal agencies have begun addressing the illegal prescribing and dispensing of prescription drugs by domestic Internet pharmacies and their affiliated physicians. Enactment of federal legislation requiring Internet pharmacies to disclose, at a minimum, who they are, where they are licensed, and how they will secure personal health information of consumers would assist state and federal authorities in enforcing existing laws. In addition, federal agencies have taken actions to address the illegal sale of prescription drugs from foreign Internet pharmacies. Cooperative efforts between federal agencies and a foreign government resulted in closing down some Internet pharmacies illegally selling prescription drugs to U.S. consumers. However, it is unclear whether these efforts will stem the flow of prescription drugs obtained illegally from other foreign sources. As a result, the sale of prescription drugs from foreign-based Internet pharmacies continues to pose difficulties for federal regulatory authorities. To help ensure that consumers and state and federal regulators can easily identify the operators of Web sites selling prescription drugs, the Congress should amend the FDCA to require that any pharmacy shipping prescription drugs to another state disclose certain information on its Internet site. 
The information disclosed should include the name, business address, and telephone number of the Internet pharmacy and its principal officers or owners, and the state(s) where the pharmacy is licensed to do business. In addition, where permissible by state law, Internet pharmacies that offer online prescribing services should also disclose the name, business address, and telephone number of each physician providing prescribing services, and the state(s) where the physician is licensed to practice medicine. The Internet Pharmacy Consumer Protection Act and the administration’s proposal would require Internet pharmacies to disclose this type of information. We obtained comments on a draft of this report from FDA, Justice, FTC, and Customs, as well as NABP and FSMB. In general, they agreed that Internet pharmacies should be required to disclose pertinent information on their Web sites and thought that our report provided an informative summary of efforts to regulate Internet pharmacies. Some reviewers also provided technical comments, which we incorporated where appropriate. However, FDA suggested that our matter for consideration implied that online questionnaires were acceptable as long as the physician’s name was properly disclosed. We did not intend to imply that online prescribing was proper medical practice. Rather, our report notes that most state medical boards responding to our survey have already concluded that a physician who issues a prescription on the basis of a review of an online questionnaire has not satisfied the standard of good medical practice required by state law. In light of this, federal action does not appear necessary. The disclosure of the responsible parties should assist state regulatory bodies in enforcing their laws. FTC suggested that our matter for congressional consideration be expanded to recommend that the Congress grant states nationwide injunctive relief.
Our report already discusses NAAG’s proposal that injunctive relief be modeled after the federal telemarketing statute. While the NAAG proposal may have some merit, an assessment of the implications of this proposal was beyond the scope of our study. FTC also recommended that the Congress enact federal legislation that would require consumer-oriented commercial Web sites that collect personal identifying information from or about consumers online, including Internet pharmacies, to comply with widely accepted fair information practices. Again, our study did not evaluate whether a federal consumer protection law was necessary or if existing state laws and regulations may already offer this type of consumer protection. NABP did not agree entirely with our assessment of the regulatory effectiveness of the state boards of pharmacy. It indicated that the boards, with additional funding and minor legislative changes, can regulate Internet pharmacies. Our study did not assess the regulatory effectiveness of individual state pharmacy boards. Instead, we summarized responses by state pharmacy boards to our questions about their efforts to identify and take action against Internet pharmacies that are not complying with state law, and the challenges they face in regulating these pharmacies. Our report notes that many states identified limited resources and jurisdictional limitations as obstacles to enforcing their laws. NABP also suggested that our matter for congressional consideration include a requirement for independent verification of the information that Internet pharmacies are required to disclose on their Web sites. In our view, the current state regulatory framework would permit state boards to verify this information should they choose to do so. We are sending copies of this report to the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Jane E. 
Henney, Commissioner of FDA; the Honorable Janet Reno, Attorney General; the Honorable Donnie R. Marshall, Administrator of the DEA; the Honorable Robert Pitofsky, Chairman of the FTC; the Honorable Raymond W. Kelly, Commissioner of the U.S. Customs Service; the Honorable Kenneth C. Weaver, Chief Postal Inspector; appropriate congressional committees; and other interested parties. We will make copies available to others upon request. If you or your staffs have any questions about this report or would like additional information, please call me at (202) 512-7119 or John Hansen at (202) 512-7105. See appendix V for another GAO contact and staff acknowledgments. To obtain information on the number of pharmacies practicing on the Internet, we conducted searches of the World Wide Web and obtained a list of 235 Internet pharmacies that the National Association of Boards of Pharmacy (NABP) had identified by searching the Web and a list of 94 Internet pharmacies identified by staff of the House Committee on Commerce by searching the Web. After eliminating duplicate Web sites, we reviewed 296 potential sites between November and December 1999. Sites needed to meet two criteria to be included in our survey. First, they had to sell prescription drugs directly to consumers. Second, they had to be anchor sites (actual providers of services) and not portal sites (independent Web pages that connect to a provider). Most portal sites are paid a commission by anchor sites for displaying an advertisement or taking the user to the service provider’s site through a “click through.” We excluded 129 Web sites from our survey because they did not meet these criteria. See table 2 for details on our analysis of the Web sites that we excluded. In April 2000, we obtained a list of 326 Web sites that FDA identified during March 2000. We reviewed all the sites on FDA’s list and compared it to the list of Internet pharmacies we had previously compiled. 
We found 117 Internet pharmacies that duplicated pharmacies on our list. We also excluded 186 Web sites that did not meet our two criteria and added the remaining 23 Internet pharmacies to our list. To categorize Internet pharmacies, we analyzed information on the Web site to determine if the Internet pharmacy (1) required a prescription from the user’s physician to dispense a prescription drug, (2) in the absence of a prescription, required the user to complete an online questionnaire to obtain a prescription, or (3) dispensed prescription drugs without a prescription. We also collected data on the types of information available on each Internet pharmacy Web site, including information about the pharmacy’s licensure status, its mailing address and telephone number, and the cost of issuing a prescription. Using the domain name from the uniform resource locator, we performed online queries of Network Solutions, Inc. (one of the primary registrars for domain names) to obtain the name, address, and telephone number of the registrant of each Internet pharmacy. We then telephoned all U.S.-based Internet pharmacies to obtain information on the states in which they dispensed prescription drugs and the states in which they were licensed or registered. See table 3 for details on our licensure information inquiry. Finally, we clustered Internet pharmacies by state and asked the pharmacy boards in the 12 states—10 of these had the largest number of licensed/registered Internet pharmacies—to verify the licensure status of each pharmacy that told us it was licensed in the state. To assess state efforts to regulate Internet pharmacies and physicians prescribing over the Internet, we conducted two mail surveys in December 1999. To obtain information on state efforts to identify, monitor, and regulate Internet pharmacies, we surveyed pharmacy boards in all 50 states and the District of Columbia.
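The Web-site screening counts described earlier in the methodology can be reconciled with a short worked example. The sketch below is illustrative only; the variable names are ours, not GAO's, and the figures are simply those reported in the text rather than output of any GAO tool.

```python
# Illustrative reconciliation of the Web-site screening counts reported
# in the text (variable names are ours, not GAO's).

# First pass: 235 NABP sites plus 94 House Commerce Committee sites,
# with duplicates removed, yielded 296 potential sites; 129 failed the
# two survey criteria (sells prescription drugs directly; anchor site).
initial_pool = 296
excluded_initial = 129
surveyed = initial_pool - excluded_initial  # sites retained for the survey

# Second pass (April 2000): FDA's list of 326 sites, of which 117
# duplicated the compiled list and 186 failed the two criteria.
fda_list = 326
fda_duplicates = 117
fda_excluded = 186
fda_added = fda_list - fda_duplicates - fda_excluded  # matches the 23 added

total_pharmacies = surveyed + fda_added
print(surveyed, fda_added, total_pharmacies)  # prints: 167 23 190
```

Under this reading, the compiled list totals 190 Internet pharmacies: 167 retained from the first screening pass and the 23 added from FDA's list.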
After making follow-up telephone calls, we received 50 surveys from the pharmacy boards in 49 states and the District of Columbia, or 98 percent of those we surveyed. The survey and survey results are presented in appendix III. We also interviewed the executive directors and representatives of the state pharmacy boards in nine states—Alabama, Iowa, Maryland, New York, North Dakota, Oregon, Texas, Virginia, and Washington—and the District of Columbia. In addition, we interviewed and obtained information from representatives of the NABP, the American Pharmaceutical Association, the National Association of Attorneys General, and pharmaceutical manufacturers, as well as representatives of several Internet pharmacies. To obtain information on state efforts to oversee physician prescribing practices on the Internet, we surveyed the 62 medical boards and boards of osteopathy in the 50 states and the District of Columbia. After follow-up telephone calls, we received 45 surveys from the medical boards in 39 states, or 73 percent of those we surveyed. The survey and survey results are presented in appendix IV. We also interviewed officials with the medical boards in five states: California, Colorado, Maryland, Virginia, and Wisconsin. In addition, we interviewed and obtained information from representatives of the American Medical Association and the Federation of State Medical Boards (FSMB). To assess federal efforts to oversee pharmacies and physicians practicing on the Internet, we obtained information from officials from the Food and Drug Administration; the Federal Trade Commission; the Department of Justice, including the Drug Enforcement Administration; the U.S. Customs Service; and the U.S. Postal Service. We also reviewed the report of the President’s Working Group on Unlawful Conduct on the Internet. The availability of prescription drugs on the Internet has attracted the attention of several professional associations.
As a result, over the past year, several associations have convened meetings of representatives of professional, regulatory, law enforcement, and private sector entities to discuss issues related to the practice of pharmacy and medicine on the Internet. We attended the May 1999 NABP annual conference, its September 1999 Executive Board meeting, and its November 1999 Internet Healthcare Summit 2000 to obtain information on the regulatory landscape for Internet pharmacy practice sites and the Verified Internet Pharmacy Practice Sites program. In January 2000, we attended a meeting convened by the FSMB of top officials from various government, medical, and public entities to discuss the efforts of state and federal agencies to regulate pharmacies and physicians practicing on the Internet. We also attended sessions of the March 2000 Symposium on Healthcare Internet and E-Commerce and the April 2000 Drug Information Association meeting. We conducted our work from May 1999 through September 2000 in accordance with generally accepted government auditing standards. Examples of state standards and enforcement actions reported to us (AG refers to a state attorney general) include the following:
- Neither in-state nor out-of-state physicians may prescribe to state residents without meeting the patient, even if the patient completes an online questionnaire. Internet exchange does not qualify as an initial medical examination, and no legitimate patient/physician relationship is established by it.
- Physicians prescribing a specific drug to residents without being licensed in the state may be criminally liable.
- Physicians prescribing on the Internet must follow standards of care.
- AG filed suit against four out-of-state online pharmacies for selling, prescribing, dispensing, and delivering prescription drugs without the pharmacies or physicians being licensed and with no physical examination.
- Referred one physician to the medical board in another state and obtained an injunction against a physician; the Kansas Board of Healing Arts also filed a lawsuit against a physician for the unauthorized practice of medicine.
- AG filed lawsuits against 10 online pharmacies and obtained restraining orders against the companies to stop them from doing business in Kansas; filed lawsuits against 7 companies and individuals selling prescription drugs over the Internet.
- Dispensing medication without physical examination represents conduct that is inconsistent with the prevailing and usually accepted standards of care and may be indicative of professional or medical incompetence.
- AG filed notices of intended action against 10 Internet pharmacies for illegally dispensing prescription drugs.
- Referred Internet pharmacy(ies) to AG for possible criminal prosecution.
- AG filed suit and obtained permanent injunctions against two online pharmacies and physicians for practicing without state licenses.
- Interviewed two physicians and suggested they stop prescribing over the Internet; they complied.
- AG filed suits charging nine Internet pharmacies with consumer fraud violations for selling prescription drugs over the Internet without a state license.
- Adopted regulations prohibiting physicians from prescribing or dispensing controlled substances or dangerous drugs to patients they have not examined and diagnosed in person; pharmacy board adopted rules for the sale of drugs online, requiring licensure or registration of pharmacy and disclosure.
- An Ohio doctor was indicted on 64 felony counts of selling dangerous drugs and drug trafficking over the Internet. The Medical Board may have his license revoked.
- AG filed lawsuits against three online companies and various pharmacies and physicians for practicing without proper licensing.
The following individuals made important contributions to this report: John C. Hansen directed the work; Claude B. Hayeck collected information on federal efforts and, along with Darryl Joyce, surveyed state pharmacy boards; Renalyn A.
Cuadro assisted in the surveys of Internet pharmacies and state medical boards; Susan Lawes guided survey development; Joan Vogel compiled and analyzed state pharmacy and medical board survey data; and Julian Klazkin served as attorney adviser. The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: [email protected] 1-800-424-5454 (automated answering system) | The first Internet pharmacies began online service in early 1999. Public health officials are concerned about Internet pharmacies that do not adhere to state licensing requirements and standards. Public officials are also concerned about the validity of prescriptions and international drugs that are not approved in the United States being sent by mail. The unique qualities of the Internet pose new challenges for enforcing state pharmacy and medical practice laws because they allow pharmacies and physicians to reach consumers across state and international borders and remain anonymous. Congress is considering legislation to strengthen oversight of Internet pharmacies.
You are an expert at summarizing long articles. Proceed to summarize the following text:
Shortages of chemical and biological defense equipment are a long-standing problem. After the Persian Gulf Conflict, the Army changed its regulations in an attempt to ensure that early-deploying units would have sufficient equipment on hand upon deployment. This direction, contained in U.S. Forces Command Regulation 700-2, has not been universally implemented. Presently, neither the Army’s more than five active divisions composing the crisis response force nor the early-deploying Army reserve units we visited had complied with the new stocking level requirements. All had shortages of critical equipment; three of the more than five active divisions had 50 percent or greater shortages of protective suits, and shortages of other critical items were as high as 84 percent, depending on the unit and the item. This equipment is normally procured with operation and maintenance funds. These shortages occurred primarily because unit commanders consistently diverted operation and maintenance funds to meet what they considered higher priority requirements, such as base operating costs, quality-of-life considerations, and costs associated with other-than-war deployments such as those to Haiti and Somalia. Relative to the DOD budget, the cost of purchasing this protective equipment is low. Early-deploying active divisions in the continental United States could meet current stocking requirements for an additional cost of about $15 million. However, unless funds are specifically designated for chemical and biological defense equipment, we do not believe unit commanders will spend operation and maintenance funds for this purpose. The shortages of on-hand stock are exacerbated by inadequate installation warehouse space for equipment storage, poor inventorying and reordering techniques, shelf-life limitations, and difficulty in maintaining appropriate protective clothing sizes. 
The Army is presently considering decreasing units’ stocking requirements to the levels needed to support only each early-deploying division’s ready brigade and relying on depots to provide the additional equipment needed on a “just-in-time” basis before deployment. Other approaches under consideration by the Army include funding these equipment purchases through procurement accounts, and transferring responsibility for purchasing and storing this material on Army installations to the Defense Logistics Agency. New and improved equipment is needed to overcome some DOD defensive shortfalls, and DOD is having difficulty meeting all of its planned chemical and biological defense research goals. Efforts to improve the management of the materiel development and acquisition process have so far had limited results and will not attain their full effect until at least fiscal year 1998. In response to lessons learned in the Gulf War, Congress directed DOD to improve the coordination of chemical and biological doctrine, requirements, research, development, and acquisition among DOD and the military services. DOD has acted. During 1994 and 1995, it established the Joint Service Integration Group to prioritize chemical and biological defense research efforts and develop a modernization plan and the Joint Service Materiel Group to develop research, development, acquisition, and logistics support plans. The activities of these two groups are overseen by a single DOD office, the Assistant Secretary of Defense (Atomic Energy) (Chemical and Biological Matters). While these groups have begun to implement the congressional requirements of P.L. 103-160, progress has been slower than expected. At the time of our review, the Joint Service Integration Group expected to produce during 1996 its proposed (1) list of chemical and biological defense research priorities and (2) joint service modernization plan and operational strategy.
The Joint Service Materiel Group expects to deliver its proposed plan to guide chemical and biological defense research, development, and acquisition in October 1996. Consolidated research and modernization plans are important for avoiding duplication among the services and otherwise achieving the most effective use of limited resources. It is unclear whether or when DOD will approve these plans. However, DOD officials acknowledged that it will be fiscal year 1998 at the earliest, about 5 years after the law was passed, before DOD can begin formal budgetary implementation of these plans. DOD officials told us progress by these groups has been adversely affected by personnel shortages and collateral duties assigned to the staff. DOD efforts to field specific equipment and conduct research to address chemical and biological defense deficiencies have produced mixed results. On the positive side, DOD began to field the Biological Integrated Detection System in January 1996 and expects to complete the initial purchase of 38 systems by September 1996. However, DOD has not succeeded in fielding other needed equipment and systems designed to address critical battlefield deficiencies identified during the Persian Gulf Conflict and earlier. For example, work initiated in 1978 to develop an Automatic Chemical Agent Alarm to provide visual, audio, and command-communicated warnings of chemical agents remains incomplete. Because of service decisions to fund other priorities, DOD has approved and acquired only 103 of the more than 200 FOX mobile reconnaissance systems originally planned. Of the 11 chemical and biological defense research goals listed in DOD’s 1995 Annual Report to the Congress, DOD met 5 by their expected completion date of January 1996. Some were not met. For example, a DOD attempt to develop a less corrosive and labor-intensive decontaminate solution is now not expected to be completed until 2002. 
Chemical and biological defense training at all levels has been a constant problem for many years. For example, in 1986, DOD studies found that its forces were inadequately trained to conduct critical tasks. It took 6 months during the Persian Gulf Conflict to prepare forces in theater to defend against chemical and biological agents. However, these skills declined again after this conflict. A 1993 Army Chemical School study found that a combined arms force of infantry, artillery, and support units would have extreme difficulty performing its mission and suffer needless casualties if forced to operate in a chemical or biological environment because the force was only marginally trained. Army studies conducted from 1991 to 1995 showed serious weaknesses at all levels in chemical and biological defense skills. Our analysis of Army readiness evaluations, trend data, and lessons learned reports from this period also showed individuals, units, and commanders alike had problems performing basic tasks critical to surviving and operating in a chemical or biological environment. Despite DOD efforts— such as doctrinal changes and command directives—designed to improve training in defense against chemical and biological warfare since the Gulf War, U.S. forces continue to experience serious weaknesses in (1) donning protective masks, (2) deploying detection equipment, (3) providing medical care, (4) planning for the evacuation of casualties, and (5) including chemical and biological issues in operational plans. The Marine Corps also continues to experience similar problems. In addition to individual service training problems, the ability of joint forces to operate in a contaminated environment is questionable. In 1995, only 10 percent of the joint exercises conducted by four major CINCs included training to defend against chemical and biological agents. 
None of this training included all 23 required chemical/biological training tasks, and the majority included less than half of these tasks. Furthermore, these CINCs plan to include chemical/biological training in only 15 percent of the joint exercises for 1996. This clearly demonstrates the lack of chemical and biological warfare training at the joint service level. There are two fundamental reasons for this. First, CINCs generally consider chemical and biological training and preparedness to be the responsibility of the individual services. Second, CINCs believe that chemical and biological defense training is a low priority relative to their other needs. We examined the ability of U.S. Army medical units that support early-deploying Army divisions to provide treatment to casualties in a chemically and biologically contaminated environment. We found that these units often lacked needed equipment and training. Medical units supporting early-deploying Army divisions we visited often lacked critical equipment needed to treat casualties in a chemically or biologically contaminated environment. For example, these units had only about 50 to 60 percent of their authorized patient treatment and decontamination kits. Some of the patient treatment kits on hand were missing critical items such as drugs used to treat casualties. Also, none of the units had any type of collective shelter to treat casualties in a contaminated environment. Army officials acknowledged that the inability to provide treatment in the forward area of battle would result in greater rates of injury and death. Old versions of collective shelters are unsuitable, unserviceable, and no longer in use; new shelters are not expected to be available until fiscal year 1997 at the earliest. Few Army physicians in the units we visited had received formal training on chemical and biological patient treatment beyond that provided by the Basic Medical Officer course. 
Further instruction on chemical and biological patient treatment is provided by the medical advanced course and the chemical and biological casualty management course. The latter course provides 6-1/2 days of classroom and field instruction needed to save lives, minimize injury, and conserve fighting strength in a chemical or biological warfare environment. During the Persian Gulf Conflict, this course was provided on an emergency basis to medical units already deployed to the Gulf. In 1995, 47 to 81 percent of Army physicians assigned to early-deploying units had not attended the medical advanced course, and 70 to 97 percent had not attended the casualty management course. Both the advanced and casualty management courses are optional, and according to Army medical officials, peacetime demands to provide care to service members and their dependents often prevented attendance. Also, the Army does not monitor those who attend the casualty management course, nor does it target this course toward those who need it most, such as those assigned to early-deploying units. DOD has inadequate stocks of vaccines for known threat agents, and an immunization policy established in 1993 that DOD so far has chosen not to implement. DOD’s program to vaccinate the force to protect them against biological agents will not be fully effective until these problems are resolved. Though DOD has identified which biological agents are critical threats and determined the amount of vaccines that should be stocked, we found that the amount of vaccines stocked remains insufficient to protect U.S. forces, as it was during the Persian Gulf Conflict. Problems also exist with regard to the vaccines available to DOD. Only a few biological agent vaccines have been approved by the Food and Drug Administration (FDA). Many remain in Investigational New Drug (IND) status. 
Although IND vaccines have long been safely administered to personnel working in DOD vaccine research and development programs, the FDA usually requires large-scale field trials in humans to demonstrate new drug safety and effectiveness before approval. DOD has not performed such field trials due to ethical and legal considerations. DOD officials said that they hoped to acquire a prime contractor during 1996 to subcontract vaccine production and do what is needed to obtain FDA approval of vaccines currently under investigation. Since the Persian Gulf Conflict, DOD has consolidated the funding and management of several biological warfare defense activities, including vaccines, under the new Joint Program Office for Biological Defense. In November 1993, DOD established a policy to stockpile sufficient biological agent vaccines and to inoculate service members assigned to high-threat areas or to early-deploying units before deployment. The JCS and other high-ranking DOD officials have not yet approved implementation of the immunization policy. The draft policy implementation plan is completed and is currently under review within DOD. However, this issue is highly controversial within DOD, and whether the implementation plan will be approved and carried out is unclear. Until that happens, service members in high-threat areas or designated for early deployment in a crisis will not be protected by approved vaccines against biological agents. The primary cause for the deficiencies in chemical and biological defense preparedness is a lack of emphasis up and down the line of command in DOD. In the final analysis, it is a matter of commanders’ military judgment to decide the relative significance of risks and to apply resources to counter those risks that the commander finds most compelling. DOD has decided to concentrate on other priorities and consequently to accept a greater risk regarding preparedness for operations on a contaminated battlefield. 
Chemical and biological defense funding allocations are being targeted by the Joint Staff and DOD for reduction in their attempts to fund other, higher priority programs. DOD allocates less than 1 percent of its total budget to chemical and biological defense. Annual funding for this area has decreased by over 30 percent in constant dollars since fiscal year 1992, from approximately $750 million in that fiscal year to $504 million in 1995. This reduction has occurred in spite of the current U.S. intelligence assessment that the chemical and biological warfare threat to U.S. forces is increasing and the importance of defending against the use of such agents in the changing worldwide military environment. Funding could decrease even further. On October 26, 1995, the Joint Requirements Oversight Council and the JCS Chairman proposed to the Office of the Secretary of Defense (OSD) a cut of $200 million for the next 5 years ($1 billion total) to the counterproliferation budget. The counterproliferation program element in the DOD budget includes funding for the joint nuclear, chemical, and biological defense program as well as vaccine procurement and other related counterproliferation support activities. If implemented, this cut would severely impair planned chemical and biological defense research and development efforts and reverse the progress that has been made in several areas, according to DOD sources. OSD supported only an $800 million cut over 5 years and sent the recommendation to the Secretary of Defense. On March 7, 1996, we were told that DOD was now considering a proposed funding reduction of $33 million. The battle staff chemical officer/chemical noncommissioned officers are a commander’s principal trainers and advisers on chemical and biological defense operations and equipment operations and maintenance. 
We found that chemical and biological officer staff positions are being eliminated and that, when the positions are filled, the officers occupying them are frequently assigned collateral tasks that reduce the time available to manage chemical and biological defense activities. At U.S. Army Forces Command and U.S. Army III Corps headquarters, for example, chemical staff positions are being reduced. Also, DOD officials told us that the Joint Service Integration and Joint Service Materiel Groups have made limited progress largely because not enough personnel are assigned to them and collateral duties are assigned to the staff. We also found that chemical officers assigned to a CINC’s staff were frequently tasked with duties not related to chemical and biological defense. The lower emphasis given to chemical and biological matters is also demonstrated by weaknesses in the methods used to monitor their status. DOD’s current system for reporting readiness to the Joint Staff is the Status of Resources and Training System (SORTS). We found that the effectiveness of SORTS for evaluating unit chemical and biological defense readiness is limited largely because (1) it allows commanders to be subjective in their evaluations, (2) it allows commanders to determine for themselves which equipment is critical, and (3) reporting remains optional at the division level. We also found that after-action and lessons-learned reports and operational readiness evaluations of chemical and biological training are flawed. At the U.S. Army Reserve Command, there is no chemical or biological defense position. Consequently, the U.S. Army Reserve Command does not effectively monitor the chemical and biological defense status of reserve forces. The priority given to chemical and biological defense varied widely. Most CINCs assign chemical and biological defense a lower priority than other threats.
Even though the Joint Staff has tasked CINCs to ensure that their forces are trained in certain joint chemical and biological defense tasks, the CINCs we visited considered such training a service responsibility. Several DOD officials said that U.S. forces still face a generally limited, although increasing, threat of chemical and biological warfare. At Army corps, division, and unit levels, the priority given to this area depended on the commander’s opinion of its relative importance. At one early-deploying division we visited, the commander had an aggressive system for chemical and biological training, monitoring, and reporting. At another, the commander had made a conscious decision to emphasize other areas, such as other-than-war deployments and quality-of-life considerations. As this unit was increasingly being asked to conduct operations other than war, the commander’s emphasis on the chemical and biological warfare threat declined. Officials at all levels said training in chemical and biological preparedness was not emphasized because of higher priority taskings, low levels of interest by higher headquarters, difficulty working in cumbersome and uncomfortable protective clothing and masks, the time-consuming nature of the training, and a heavy reliance on post-mobilization training and preparation. We have no means to determine whether increased emphasis on chemical and biological warfare defense is warranted at the expense of other priorities. This is a matter of military judgment by DOD and of funding priorities by DOD and the Congress. We anticipate that in our report due in April 1996, we will recommend that the Secretary of Defense reevaluate the low priority given to chemical and biological defense and consider adopting a single manager concept for the execution of the chemical and biological program given the increasing chemical and biological warfare threat and the continuing weakness in the military’s defense capability. 
Further, we anticipate recommending that the Secretary consider elevating the office for current oversight to its own Assistant Secretary of Defense level, rather than leaving it in its present position as part of the Office of the Assistant Secretary for Atomic Energy. We may make other recommendations concerning opportunities to improve the effectiveness of existing DOD chemical and biological activities. We would be pleased to respond to any questions you may have. | GAO discussed its assessment of U.S. forces' capability to fight and survive chemical and biological warfare.
GAO noted that: (1) none of the Army's crisis-response or early-deployment units have complied with requirements for stocking equipment critical for fighting under chemical or biological warfare; (2) the Department of Defense (DOD) has established two joint service groups to prioritize chemical and biological defense research efforts, develop a modernization plan, and develop support plans; (3) although DOD has begun to field a biological agent detection system, it has not successfully fielded other needed equipment and systems to address critical battlefield deficiencies; (4) ground forces are inadequately trained to conduct critical tasks related to biological and chemical warfare, and there are serious weaknesses at all levels in chemical and biological defense skills; (5) medical units often lack the equipment and training needed to treat casualties resulting from chemical or biological contamination; (6) DOD has inadequate stocks of vaccines for known threat agents and has not implemented an immunization policy established in 1993; and (7) the primary cause for these deficiencies is a lack of emphasis along the DOD command chain, with DOD focusing its efforts and resources on other priorities. |
The regulatory structure of the U.S. securities markets was established by the Securities Exchange Act of 1934, which created SEC as an independent agency to oversee the U.S. securities markets and their participants. Similarly, in 1974 the Commodity Exchange Act established CFTC as an independent agency to oversee the U.S. commodity futures and options markets. Both agencies have five-member commissions headed by chairpersons who are appointed by the President of the United States for 5-year terms. Among other things, the commissioners approve new SEC and SRO rules and amendments to existing rules. They also authorize enforcement actions. SEC and CFTC are headquartered in Washington, D.C. SEC has a combined total of 11 regional and district offices; CFTC has 5 regional offices. Within SEC and CFTC, the divisions of enforcement are responsible for investigating possible violations of the securities and futures laws, respectively. With their commissions’ approval, they litigate or settle actions against alleged violators in federal civil courts and in administrative actions. Typically, enforcement staff investigate alleged violations of law, prepare a memorandum for the commissioners that describes alleged violations, and, if appropriate, make recommendations for further action. When the commissions decide that a case warrants further action, they can authorize filing a civil suit against the alleged violator in federal district court or instituting a proceeding before an administrative law judge. If either the court or the administrative law judge finds that a defendant has violated securities or futures laws, it can issue a judgment ordering sanctions such as fines and disgorgements and, in the case of futures violations, restitution; it can also bar or suspend violators from the securities and futures industries. 
The collection process for delinquent debt begins when all or part of a fine or disgorgement becomes delinquent because the violator has failed to pay some or all of the amount due by the date ordered by the court or administrative law judge. If the court or administrative law judge has not specified a payment date and no stay has been entered, SEC considers the debt delinquent 10 days after the court enters the judgment. CFTC officials told us that absent an appeal, they consider the debt delinquent 15 or 60 days after the administrative law judge or court entered the judgment in administrative and civil cases, respectively. SEC and CFTC collect delinquent monetary judgments primarily through post-judgment litigation, negotiating payments with defendants, and making referrals to the Department of Treasury or the Department of Justice. In accordance with the Debt Collection Improvement Act of 1996, SEC and CFTC have each entered into an agreement with the Department of Treasury to improve collections. Under this act, federal agencies are required to submit all nontax debts that are 180 days delinquent to Treasury’s FMS. The act also requires that FMS either take appropriate steps to collect the debt or terminate collection actions. In addition to using traditional methods to collect these debts, such as sending demand letters and hiring private collection agencies, FMS can use TOP. Under TOP, FMS identifies federal payments, such as tax refunds, that are owed to individuals and applies the payments to their outstanding debt. All cases referred to FMS for collection are also eligible for referral to and servicing under TOP. FMS also uses collection agencies to negotiate compromise offers with individual debtors. A compromise offer is an agreement between a federal agency and an individual debtor, in which the federal agency agrees to discharge a debt by accepting less than the full amount. 
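The delinquency-timing rules described above (SEC: 10 days after judgment absent a specified payment date or stay; CFTC: 15 or 60 days for administrative and civil cases, respectively; mandatory FMS referral once a nontax debt is more than 180 days delinquent) can be sketched as a small Python helper. This is an illustrative sketch only; the function names and parameters are assumptions, not part of any SEC, CFTC, or Treasury system.

```python
from datetime import date, timedelta
from typing import Optional

def delinquency_date(judgment_date: date,
                     agency: str,
                     case_type: str = "civil",
                     ordered_due_date: Optional[date] = None,
                     stayed: bool = False) -> Optional[date]:
    """When a fine or disgorgement becomes delinquent, per the rules above.

    - If the court or administrative law judge specified a payment date,
      the debt is delinquent once that date passes unpaid.
    - SEC: absent a specified date and no stay, 10 days after judgment.
    - CFTC: absent an appeal, 15 days (administrative) or 60 days (civil)
      after the judgment is entered.
    Returns None while a stay is in effect.
    """
    if stayed:
        return None
    if ordered_due_date is not None:
        return ordered_due_date
    if agency == "SEC":
        return judgment_date + timedelta(days=10)
    if agency == "CFTC":
        return judgment_date + timedelta(days=15 if case_type == "administrative" else 60)
    raise ValueError(f"unknown agency: {agency}")

def eligible_for_fms_referral(delinquent_since: date, today: date) -> bool:
    """Debt Collection Improvement Act of 1996: nontax debts that are more
    than 180 days delinquent must be submitted to Treasury's FMS."""
    return (today - delinquent_since).days > 180
```

Note that SEC later imposed the 180-day referral window on itself as a deadline counted from the delinquency date, which is what the second helper models.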
Once the collection agency negotiates a compromise offer with a debtor, it forwards the offer to FMS. In the absence of an agreement between FMS and the federal agency to approve compromise offers on its behalf, FMS refers the offer to the federal agency for final approval. The U.S. securities and futures markets are regulated under their respective statutes through a combination of self-regulation (subject to federal oversight) and direct federal regulation. This regulatory scheme was intended to give SROs responsibility for administering their own operations, including most of the daily oversight of the securities and futures markets and their participants. Two of the SROs—NASD and NFA—are associations that regulate registered securities and futures firms and oversee securities and futures professionals, respectively. The remaining SROs include national exchanges that operate the markets where securities and futures are traded. These SROs are primarily responsible for establishing the standards under which their members conduct business; monitoring the way that business is conducted; and bringing disciplinary actions against their members for violating applicable federal statutes, their own rules, and the rules promulgated by their federal regulator. SROs can impose fines and other sanctions against members that violate securities or futures laws or SRO rules, as applicable, through their enforcement and disciplinary processes. Some SROs’ disciplinary proceedings are decided by a hearing panel, which examines the evidence and decides on the appropriate sanction. SROs’ actions are usually initiated by a customer complaint, a compliance examination, market surveillance, regulatory filings, or a press report. SEC and CFTC have taken actions to improve their collection programs, addressing the three recommendations in our 2001 fines report. However, it was too early to assess the effectiveness of their actions. 
After we made our first recommendation, SEC took various steps, among them, implementing collections guidelines that were intended to ensure that eligible delinquent cases are referred to FMS, including TOP. But SEC’s actions have not ensured that all eligible cases are referred. To address our second recommendation, SEC developed procedures for responding to compromise offers submitted by FMS within 30 days. To address our third recommendation, CFTC implemented procedures for ensuring the timely referral of delinquent cases to FMS for collection. SEC implemented regulations, related procedures and guidelines, and a collections database intended to ensure that eligible delinquent cases are referred to FMS, including TOP, as required by the Debt Collection Improvement Act of 1996. However, SEC has focused on referring post-guidelines cases, and it was too early to assess the effectiveness of SEC’s strategy as it related to these cases. In contrast, SEC did not have a formal strategy for referring pre-guidelines cases and, further impeding its collection efforts, it did not have a reliable agencywide system for tracking monies owed in these cases. Recognizing that its system was unreliable, SEC has drafted a two-phase action plan under which it will implement a centralized agencywide tracking system for all delinquent debt. However, it has not established a time frame for fully implementing the computer system for the second phase of the plan. We recommended in our 2001 report that SEC take steps to ensure that regulations allowing SEC’s delinquent fines to be submitted to TOP be adopted so that SEC would benefit from the associated collection opportunities. At the time of our review, SEC officials had told us that they had rewritten their rules for using TOP but that they could not estimate when the rules would be approved by the commission or implemented. After we made our recommendation, SEC amended its debt collection regulations.
In April 2002, SEC implemented related procedures to allow cases to be forwarded to TOP. Consistent with the Debt Collection Improvement Act of 1996, the procedures required that cases be referred to FMS after they had been delinquent for more than 180 days. SEC subsequently issued additional guidelines and implemented a collections database that were intended to ensure that eligible delinquent post-guidelines cases are referred to FMS, including TOP, within 180 days of becoming delinquent. SEC imposed the more stringent requirement on itself in recognition of the enhanced probability of collecting monies ordered on newer cases. The guidelines provided more detailed instructions for staff on how to pursue collections, specifying steps for referring eligible delinquent cases to FMS, including TOP, within 180 days. According to an agency official, the guidelines went into effect agencywide on September 2, 2002. SEC also created a collections database for all post-guidelines fines and disgorgement cases that is maintained by headquarters and each regional or district office, as applicable. The database tracks actions that staff have taken to recover debt on delinquent cases, including preparing cases for referral to FMS, and is used to help ensure that staff are following the new collections guidelines. SEC officials told us that the agency was tracking only post-guidelines cases because the database had limited storage capacity and could become unstable if too many cases were added. In addition, the agency has assigned attorneys and administrative staff to every office to maintain the database and its related collection activities for delinquent cases, including ensuring that eligible cases are referred to FMS and TOP in a timely manner. According to an agency official, these staff received training on using the guidelines in the fall of 2002.
It was too early to fully assess the effectiveness of SEC’s strategy for tracking, collecting, and referring post-guidelines cases, because most of these cases were not yet 180 days delinquent. Based on a judgmental sample of 66 cases, we identified 4 delinquent fines and disgorgement cases valued at $4 million that were eligible for referral as of March 31, 2003. We found that SEC had referred two of the four cases within the 180-day time frame and was preparing the other two for referral. Although SEC had developed controls to better ensure that eligible post-guidelines cases were promptly referred to FMS and TOP, it had not developed a formal strategy for referring eligible pre-guidelines cases. Such a strategy would include prioritizing cases based on their collection potential and establishing time frames for making the referrals. Further impeding its collection efforts, SEC’s original system for tracking monies owed in pre-guidelines cases—DPTS—was not reliable. As a result, SEC could not identify all the cases that had not been referred to FMS and TOP. SEC officials told us that the agency’s April 2002 procedures applied to the pre-guidelines cases and that agency attorneys had followed these procedures in referring some pre-guidelines cases to Treasury. But SEC did not know the extent to which the procedures were being followed or whether eligible cases were not being referred. They explained that the attorneys would know the status of the cases assigned to them but that no agencywide information was available. They also told us that they expected all eligible cases to be referred to FMS and TOP eventually but noted that they had not prioritized the cases for referral or established time frames for referring them. Neither we nor SEC could determine with any certainty the extent to which eligible pre-guidelines cases were not being referred to FMS and TOP due to the unreliability of DPTS.
Using DPTS, the only information available, we identified about 900 pre-guidelines cases valued at about $2.8 billion that were 180 days past due and that might be eligible for referral. As of January 31, 2003, almost 54 percent of these cases were over 3 years old based on their judgment date, which, in the absence of better data, we used as a rough proxy for the delinquency date. SEC officials emphasized that these numbers do not accurately reflect the number of pre-guidelines cases eligible for referral to FMS and TOP. They said that some of the cases were ineligible for referral because they were on appeal, in post-judgment litigation, or had a receiver appointed to marshal and distribute assets. In addition, many cases might already have been referred for collection. SEC officials also pointed out that our calculations of the age of cases were inaccurate because we relied on the judgment date rather than the delinquency date, which is not tracked in DPTS. We recognize that many factors affect the accuracy of DPTS, including some that might not be mentioned here. However, we are reporting these numbers as the best information available. Both GAO and SEC have recognized DPTS’s lack of reliability. Our 2002 disgorgement report and a January 2003 report commissioned by the SEC Inspector General found that DPTS was not complete and accurate and could not be relied upon for financial accounting and reporting purposes. Recognizing that the agency did not have a system that provided an accurate assessment of levied amounts and payments (among other things), SEC developed a draft action plan for implementing a new system to replace DPTS. The April 2003 draft plan calls for implementing a comprehensive centralized system for tracking, documenting, and reporting on fines and disgorgements ordered, paid, and disbursed in SEC enforcement actions. The agency had been taking steps to address the milestones in the plan. 
If the plan is effectively implemented, the agency should have a tool for accurately identifying uncollected pre-guidelines cases for referral to FMS and TOP for collection. SEC’s action plan has been divided into two phases. In the first phase, SEC is tentatively scheduled to replace DPTS by the end of fiscal year 2003. SEC officials described the replacement system as a comprehensive case tracking, record-keeping, and reporting system for fines and disgorgements ordered, paid, and distributed. They said that the system will be integrated with a database maintained by the Division of Enforcement. The replacement system is intended to, among other things, maintain the data on debt needed for general reporting and management purposes. According to SEC officials, one benefit of the replacement system will be to assist the agency in managing its delinquent cases. However, SEC will continue to rely on its new collections database, which tracks collection efforts on post-guidelines cases, to ensure the timely referral of these cases to FMS and TOP until phase two of the action plan is implemented. In phase two, SEC plans a comprehensive upgrade to its case tracking system, which will be integrated with several other databases, including the new collections database. SEC expects to begin the requirements analysis for the phase two computer system in fiscal year 2004 but has not established a milestone for completing this analysis. After the requirements analysis is complete, SEC plans to establish an implementation date for the system. We recommended in our July 2001 report that SEC continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner. Our recommendation resulted from a finding that SEC did not always respond to compromise offers promptly and that as a result some debts had never been collected. For example, we reported that FMS waited between 42 and 327 days for SEC’s decisions on three compromise offers.
But by the time SEC made its decisions, the debtors no longer had the money to pay the amounts specified in the compromise offers. To address this concern, in April 2001 FMS proposed securing delegation authority from SEC—that is, permission to approve compromise offers that SEC did not respond to within 30 days. In response to our recommendation, SEC took several steps to ensure that compromise offers are approved in a timely manner. First, in July 2001 SEC implemented procedures specifying the actions required to address a compromise offer, including a schedule to ensure that a decision is made within 30 days. For example, within 5 days of receiving an offer, SEC staff are to have made a final decision on whether to recommend the offer to the commission for approval. SEC also implemented controls to monitor the status of offers. When it receives a compromise offer from FMS, SEC enters the offer into a system that tracks information such as the date the offer was made, the name of the attorney reviewing the offer, the date the offer was referred to the commission for a final decision, and the date of the final decision. The Division of Enforcement’s chief counsel monitors the status of offers based on weekly reports generated from this system to ensure that follow-up action is taken to address any problems. Finally, SEC has designated two staff to respond to FMS inquiries about the status of compromise offers. It is still too early to determine the effectiveness of SEC’s actions. As of April 22, 2003, SEC had received four compromise offers from FMS under its new procedures. SEC and FMS data showed that SEC had responded to three of the offers within the 30-day guideline and to one offer within 40 days. The late offer represented a debt of $1.6 million, and the settlement offer was for $50,000. SEC staff told us that the agency ultimately rejected the offer, at least in part because of the disparity between the amount offered and the amount owed. 
SEC officials attributed the delay in responding to this offer to scheduling conflicts caused by the holiday season. The officials told us that the agency was in touch with FMS before the end of 30 days to indicate, on an informal basis, that the reply to the compromise offer would be delayed and that the offer would be rejected. FMS officials told us that they did not view SEC’s late response to this offer as a problem—that is, the delay did not represent weaknesses in agency policies, procedures, or controls. They said that SEC had shown marked improvement in responding to compromise offers and that as a result FMS was no longer seeking delegation authority from SEC. We recommended in our 2001 report that CFTC take steps to ensure that delinquent fines were promptly referred to FMS, including creating formal procedures that addressed both sending debts to FMS within the required time frames and requiring all of the necessary information from the Division of Enforcement on these debts. Our recommendation flowed from a finding in an April 2001 report by CFTC’s Inspector General showing that CFTC staff were not referring delinquent debts to FMS in a timely manner, potentially limiting FMS’s ability to collect the monies owed. The report also noted that CFTC’s collection procedures had not been updated to address referrals to FMS and, among other examples, identified a fine in the amount of $7 million that had not been referred to FMS for more than 2 years because of inadequate communication between CFTC’s Division of Enforcement and its Division of Trading and Markets. As we recommended, CFTC has improved its procedures for referring its debt to FMS in a timely manner and has taken steps to ensure that it has all the necessary enforcement information before making the referral. CFTC updated its collection procedures and implemented them in July 2002. 
They now include specific requirements for referring debt to FMS within 180 days of the date that the debt became delinquent. CFTC also implemented controls to ensure that it has identified all delinquent debt eligible for referral. For example, CFTC management reviews quarterly reports on the status of cases to ensure that all debts are referred to FMS within 180 days. According to CFTC officials, the agency’s shift of all debt collection responsibility from its Division of Trading and Markets to its Division of Enforcement streamlined its debt referral process. Although it is too early to fully assess the effectiveness of CFTC’s actions, our review of CFTC’s data on uncollected cases indicated that the agency had been referring all eligible debt to FMS within 180 days. As of April 24, 2003, CFTC had had four delinquent cases dating from the time its procedures went into effect. Using FMS’s data, we confirmed that the cases had been referred to FMS within 123 days. Also, a review of CFTC’s data of all delinquent cases levied before the procedures went into effect showed that CFTC had referred all eligible cases to FMS for collection. FMS officials told us that CFTC had been making debt referrals with complete information on all its cases. SEC and CFTC have taken steps to address our two recommendations for improving their oversight of SROs’ sanctioning practices. But SEC has not fully implemented our 1998 recommendation that it analyze industrywide data on SRO-imposed sanctions to examine disparities and help improve disciplinary programs. The agency has experienced technological problems that have hampered its ability to complete these analyses. In addition—and consistent with our 2001 recommendation—SEC and CFTC have been monitoring readmission applications to the securities and futures industries. However, at the time of our review neither had received any applications since changing their fine imposition practices. 
Also, SEC, CFTC, NASD, and NFA have controls designed to ensure that inappropriate readmissions do not occur. Further, while examining the application review process, we found weaknesses in controls over fingerprinting that could result in inappropriate admissions to the securities and futures industries. In our 1998 report, we recommended that SEC analyze industrywide information on disciplinary program sanctions, particularly fines, to identify possible disparities among the SROs and find ways to improve SROs’ disciplinary programs. We concluded that analyzing industrywide data could provide SEC with an additional tool to identify disparities among SROs that might require further review. We reported in 2001 that SEC had developed a database to collect information on SROs’ disciplinary actions. As of June 30, 2003, according to agency officials, SEC was still inputting information into its database but had not yet completed any analyses because technological difficulties had hampered its ability to collect sufficient data to perform the analyses. First, the database had a limited number of fields and therefore could not capture multiple disciplinary violations or multiple parties in a single case. In October 2002, SEC officials told us that they had addressed this limitation by enhancing the database to incorporate the required fields and were continuing to add disciplinary information to the database. However, in November 2002, the enhanced database failed because it could not support multiple users. SEC repaired the database, and agency officials told us that they expected to complete their first data analyses in the summer of 2003. The analyses are expected to show whether SROs impose similar fines and sanctions for similar violations. An SEC official said that the agency expects these analyses to supplement the information obtained during agency inspections of the SROs’ disciplinary programs. 
SEC officials told us that the agency is planning to use funds from its fiscal year 2003 budget increase to develop a new disciplinary database that will replace the current one. According to SEC officials, this new disciplinary database is expected to allow SROs to submit data on-line rather than having to send it to SEC to be entered by staff. This streamlined process is expected to reduce data entry errors. An SEC official told us that while planning had begun for the new disciplinary database, no completion date had been established. In our 2001 report, we recommended that SEC and CFTC periodically assess the pattern of readmission applications to ensure that the changes in NASD’s and NFA’s fine imposition practices do not result in any unintended consequences, such as inappropriate readmissions. NASD and NFA had stopped routinely assessing fines when barring individuals in October 1999 and December 1998, respectively, eliminating the related requirement that the fines be paid as a condition of reentry to the securities and futures industries. These fines had rarely been collected, because few violators ever sought reentry. We were concerned that because barred individuals were no longer required to pay a fine before reentry, they might be more willing to seek readmission. Consistent with our recommendation, SEC and CFTC have monitored readmission applications. They found, and we confirmed, that no individuals who were barred after the changes in NASD’s and NFA’s fine imposition practices had applied for reentry. Also, NASD’s and NFA’s application review processes included controls designed to ensure that inappropriate applications for reentry are not approved. Officials of both SROs told us that as part of their background checks they did a database search against the names of past and current registrants in both industries to determine whether the applicants had a disciplinary history. 
In addition, both SROs submitted applicants’ fingerprints to the Federal Bureau of Investigation (FBI) for a criminal background check. NASD and NFA required all individuals who had been suspended, expelled, or barred to be—at a minimum—sponsored by a registered firm before being considered for readmission. According to a CFTC official, finding a sponsor is difficult, as most firms would not hire an individual with a history of serious disciplinary problems, in part due to increased supervisory requirements and the risk of harming their reputations. SEC and CFTC were reviewing the applications of all individuals who had been statutorily disqualified from registration, including any barred individuals, and had the authority to reverse an admission decision made by NASD or NFA, respectively. SEC and CFTC officials told us that they would consider various factors when reviewing a readmission application, including the facts and circumstances of the case, the appropriateness of the proposed supervision, and the prospective employer’s ability to provide the proposed supervision. Officials from both agencies told us that if they were to begin receiving a large number of applications from barred applicants, they would reexamine the SROs’ fine imposition practices. While examining the application review process, we found that neither the related statutes, SEC, nor CFTC required the SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belonged to the applicants who submitted them. Further, in the absence of such a requirement, NASD, the New York Stock Exchange (NYSE), and NFA lacked related controls over fingerprinting, potentially allowing inappropriate persons to enter the securities and futures industries. The securities and futures laws require that applicants to these industries have their fingerprints taken and then sent for review to the FBI as part of a criminal background check. 
The goal of the criminal background check is to ensure that inappropriate individuals are not granted admission to the securities or futures industries. The statutes also require SRO member firms to be responsible for assuring that their personnel are fingerprinted. SEC and CFTC rules provide that applicants can satisfy this requirement by submitting fingerprints to the SROs who then send them to the FBI for processing. However, neither the statutes, SEC, nor CFTC require SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belong to the applicants who submitted them. In the absence of such a requirement, NASD, NYSE, and NFA have not imposed requirements on member firms to help ensure that the identity of the person being fingerprinted matches the fingerprints being submitted for FBI review. The SROs told us that, consistent with the law, they required their members to be fingerprinted and that these fingerprints were submitted to the FBI for assessment. NYSE officials emphasized that their members were in full compliance with the law and related regulations, which do not require specific controls. In the absence of specific requirements, firms have taken a variety of approaches to fingerprinting applicants. For example, while SEC and some SROs told us that most firms used their own personnel or police officers to obtain fingerprints, they said that a small number of firms may allow applicants to fingerprint themselves, a practice that provides an opportunity for individuals to perpetrate fraud by submitting someone else’s fingerprints instead of their own. According to SEC and CFTC officials, their agencies have trained staff in their headquarters and some regional offices that take fingerprints of their employees using approved fingerprinting kits. An NFA official also stated that NFA headquarters has trained staff that take fingerprints of industry applicants, verifying their identities as part of the process. 
The FBI also informed us that it suggests using law enforcement or other trained personnel to take fingerprints. SEC and NYSE also said that many reputable businesses provide fingerprinting services and that SRO member firms could contract with these businesses. In a 1996 CFTC review of NFA’s registration fitness program, CFTC recommended that NFA conduct a review to determine the feasibility of adopting controls to ensure that the fingerprints submitted for criminal history checks belonged to the applicant. NFA found that a number of obstacles stood in the way of establishing an effective program to verify fingerprints. According to an NFA official, the agency examined the procedures of the Bureau of Citizenship and Immigration Services of the Department of Homeland Security (formerly the Immigration and Naturalization Service) in responding to CFTC’s recommendation. On the basis of this examination, NFA concluded that it would not be cost-effective to replicate the bureau’s procedures. For example, unlike NFA, the bureau has fingerprinting sites throughout the country with trained employees to take fingerprints. As part of its review, NFA considered requiring an attestation form, which would include the fingerprinter’s name and address and the document used to verify the applicant’s identity. Ultimately, however, NFA concluded that such a form could be subject to forgery and would not provide assurance that the fingerprints belonged to the applicant. CFTC accepted NFA’s conclusions. NYSE and NFA officials described other obstacles to establishing controls over fingerprinting. They explained that space limitations on the FBI fingerprint card made it difficult to identify the person taking the fingerprints. Further, they said that the card provided space for the fingerprinter’s signature, which is often illegible, but not for the fingerprinter’s printed name or the name of another contact who could verify information related to the fingerprints.
NYSE officials also said that the FBI could adjust its fingerprint card so that it required more complete contact information for the person taking the fingerprints. An NFA official also told us that because some SROs process registration applications both nationally and internationally, these SROs would not be able to establish enforceable rules regarding who should take fingerprints. We did not determine the extent to which individuals with a criminal history could submit someone else’s fingerprints and thus enter the securities or futures industries undetected. However, SEC and CFTC officials said that the SROs’ fingerprinting processes are vulnerable to such a practice because of the lack of controls for preventing applicants from using someone else’s fingerprints as their own. SRO officials said that existing systems were reasonably designed to prevent fraud but were not foolproof, adding that the potential cost of imposing any unduly restrictive requirements was a concern. Some SRO officials said that to the extent they are needed, SEC and CFTC should establish industrywide standards. NFA officials said that since weaknesses in fingerprinting procedures apply equally to the securities and futures industries, SEC and CFTC should establish comparable requirements to ensure that one industry is not at a disadvantage to the other. NYSE officials said that SEC rulemaking would be the most appropriate method for changes to fingerprinting procedures in the securities industry. To provide a more complete picture of efforts by securities and futures regulators to collect fines, we calculated the collection rates in two different ways. The collection rates for closed cases (cases with a final judgment order for which all collection actions were completed) for SEC, CFTC, and the SROs from January 1997 to August 2002 showed that the regulators collected most of the fines imposed.
Broadening the analysis to include open cases (cases with a final judgment order that remained open while collection efforts continued) had the greatest impact on SEC’s and CFTC’s collection rates because of a few large uncollected fines. Our analysis of the collection rates highlights a theme introduced in an earlier report that the collection rate alone may not be a valid measure of the effectiveness of collection efforts, because collections can be influenced by factors that are outside regulators’ control. SEC, CFTC, and the SROs collected between 75 and 100 percent of all the fines imposed in closed cases. For these cases, collection efforts had ceased either because the fines had been collected in full or in part or were unlikely to be collected and thus had been written off as bad debts. As shown in table 1, SEC and CFTC collected about 94 and 99 percent, respectively, of the total dollars levied in cases closed from January 1997 through August 2002—the period immediately following the one covered in our 1998 fines report. These amounts represent an 11 and 18 percentage point increase, respectively, over the rates presented in the 1998 report, which covered the 1992–96 period. CFTC wrote off fewer fines as uncollectible in the more recent period, and almost all of its collected fines were paid in full. The eight securities and futures SROs for which data were available had the same or higher collection rates on closed cases in the most recent period compared with the earlier period. The Chicago Board of Trade’s collection rate showed significant improvement, increasing from 54 to 95 percent of the total dollars levied. Its collection rate for the 1992–96 period was heavily influenced by two large uncollected fines totaling $2.25 million. Excluding those two cases, the rate for this period would have been about 99 percent rather than 54 percent—much closer to the 95 percent rate for the more recent period. 
NASD’s and NFA’s rates also showed significant improvement, increasing 71 and 48 percentage points, respectively, over the rates presented in the 1998 report, which covered the 1992–96 period. However, NASD’s and NFA’s collection rates improved because, as we have noted, the regulators stopped routinely assessing fines when barring individuals from the securities and futures industry. These fines had been the most difficult to collect, because barred individuals had little incentive to pay them. SEC’s and CFTC’s collection rates were affected more than the SROs’ rates when we added open cases to our calculations. As shown in table 2, SEC collected about 40 percent of the total dollars levied in all cases, open and closed, from January 1997 through August 2002—54 percentage points less than its rate for closed cases. We examined SEC’s collection rates by year and found that the rates varied greatly over time because of a few large fines. (See appendix III for the collection rates of the securities regulators for open and closed cases by calendar year.) For example, in 1999 SEC collected 26 percent of the total fines levied in that year, but one uncollected fine of $123 million significantly lowered the rate. Had SEC been able to collect this one fine, its collection rate for 1999 would have been 89 percent (fig. 1). Also, in 2002, SEC collected 61 percent of all fines, but approximately half came from two payments made by two violators. Excluding these payments, the reported collection rate for 2002 would have been about 30 percent (fig. 1). To help control for the influence of large dollar amounts on SEC’s collection rates, we analyzed the number of cases paid in full and found that SEC had collected the full amount of the fine in the majority of cases it levied. For the entire period from 1997 through 2001, 72 percent of the fines levied had been paid in full. In 2002, 55 percent of the fines levied were paid in full. 
The rate may be lower for 2002 because SEC has had less time—approximately 4 months—to collect on cases levied through August 2002. CFTC collected about 45 percent of the total dollar amount of the fines it levied over the same period. Like SEC’s rate, CFTC’s was heavily influenced by a few large fines. A closer review of CFTC’s annual rates from January 1997 through August 2002 showed that the regulator collected between 2 and 90 percent of the total fines levied. (See appendix IV for the collection rates of the futures regulators for open and closed cases by calendar year.) But in 2000, when CFTC’s collection rate was just 2 percent, our calculations included a single uncollected fine of $90 million. Had CFTC been able to collect this one fine, its collection rate would have been 95 percent (fig. 2). Also, in 1998, when CFTC collected 90 percent of the total dollar amount levied through August 2002, one payment for $125 million heavily skewed the rate (fig. 2). Without this one payment and fine, CFTC’s reported collection rate would have been approximately 7 percent (fig. 2). To help control for the influence that large dollar amounts can have on the rate, we again analyzed the number of cases paid in full. Over the entire period of our study, from 1997 through August 2002, CFTC had collected the full amount in slightly more than 50 percent of the cases it levied. Although CFTC’s collection rates over the entire period of our study were relatively low, the agency was actively pursuing collections on all its uncollected cases, primarily through the Departments of Treasury and Justice. CFTC’s Chief of Cooperative Enforcement told us that the agency would continue to levy large fines when appropriate, even though large uncollectible amounts could reduce the agency’s collection rate. He said that levying fines that are commensurate with the related wrongdoing sends a message to the public that CFTC is serious about enforcing its statutes.
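The arithmetic behind these outlier effects can be sketched with a short example. The case mix below is hypothetical; only the resulting pattern is chosen to mirror the figures reported above (a rate of roughly 2 percent including one $90 million uncollected fine, versus about 95 percent without it):

```python
# Hypothetical illustration of how one large uncollected fine can dominate a
# collection rate. The individual case amounts are invented for illustration;
# they are not CFTC's actual case data.

def collection_rate(cases):
    """cases: list of (levied, collected) dollar amounts; returns a percentage."""
    levied = sum(lev for lev, _ in cases)
    collected = sum(col for _, col in cases)
    return 100.0 * collected / levied

# Twenty small fines with 95 percent collected, plus one $90 million fine on
# which nothing was collected.
cases = [(100_000, 95_000)] * 20 + [(90_000_000, 0)]

rate_all = collection_rate(cases)             # dominated by the outlier
rate_excluding = collection_rate(cases[:-1])  # small fines only
print(f"including the $90 million fine: {rate_all:.0f}%")   # 2%
print(f"excluding the $90 million fine: {rate_excluding:.0f}%")  # 95%
```

The same mechanism works in reverse: one or two very large payments (such as the $125 million payment in 1998) can raise an annual rate just as sharply.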
The collection rates for the nine securities and futures SROs were comparable in both sets of calculations (see table 2). When we included open cases in our calculations, these SROs’ collection rates decreased slightly, with all but two (NASD’s and the New York Mercantile Exchange’s) declining between 1 and 9 percentage points. One reason for the relatively small decline was that these SROs generally had fewer and smaller uncollected fines, suggesting that they had been more successful in collecting on all cases than SEC and CFTC. According to an NFA official, one reason that the SROs that operate markets had higher collection rates was that in their role as exchanges they could sell a member’s “seat,” or membership, to pay off the fine, giving members an incentive to pay their fines. Because other regulators do not have this type of leverage, their rates are typically lower. NASD’s collection rate for closed cases was 95 percent and its rate for open and closed cases was 66 percent—a change of 29 percentage points. NASD’s rate for open and closed cases was affected by low collections in 1997 and 1998. As a result, the rates did not necessarily reflect the effects of the changes NASD made to its fine imposition practices in October 1999. As indicated in figure 3, NASD’s annual collection rates generally increased from January 1997 through December 2002. In 1997, NASD collected 26 percent of the total dollars invoiced. In 2002, it collected 96 percent—a 70 percentage point increase over 6 years. As we reported earlier, one of the primary reasons for the increases was a change in the way NASD imposes fines. Specifically, NASD stopped routinely assessing fines when barring an individual from the industry, reducing the number of fines it invoiced each year and improving its overall collection rate. Also, in calculating its rate, NASD excluded about $137 million in fines that would be due and payable only if the fined individuals were to reenter the securities industry. 
The New York Mercantile Exchange’s collection rate for open and closed cases was 83 percent—a decline of 17 percentage points from its closed case rate. When we excluded one uncollected $200,000 fine, the collection rate for open and closed cases declined by only 4 percentage points. Collection rates are the most widely available—and in some cases the only—measure of regulators’ success in collecting fines for violations of securities and futures laws. But external factors over which regulators have no control can skew these rates. Nonetheless, examining the rates and the factors influencing them can be a starting point for obtaining an understanding of regulators’ performance and changes to it. Also, in exploring these rates regulators can identify cases that account for a significant share of uncollected debts and decide whether continuing with collection efforts for these cases is worthwhile. Primary among the external factors affecting collection rates are the large fines and payments that we have been discussing. Just one or two extremely large uncollected fines can lower a collection rate significantly. Similarly, one or two large payments on such fines can raise a collection rate. Other external factors that can influence collection rates include violators’ ability to pay and the size of the fines themselves. For example, an SEC official said that some violators who have been barred from the industry cannot pay their fines because their earning capacity has been limited. In discussing CFTC’s relatively low collection rate, an agency official told us that the courts, in an attempt to match the gravity of the sanction to the offense, have sometimes imposed fines that are more than what an agency might realistically be able to collect. This official said that in one case, a court fined a company $90 million—triple the monetary gain from its illegal activities. 
He also said that in another case, a court assessed fines totaling $4 million against four violators, although CFTC had sought $660,000. Since our last report, SEC and CFTC have made material improvements to their policies and procedures for collecting delinquent fines that, if followed, should improve collections on debts owed to the federal government. Nonetheless, SEC lacks a formal strategy for collecting on its pre-guidelines delinquent debt. Although the probability of collecting monies ordered on older cases diminishes over time, some portion of these pre-guidelines cases may have collection potential that is being overlooked. Developing a formal strategy that prioritizes pre-guidelines cases based on their collection potential and establishes time frames for their referral to FMS and TOP would improve the likelihood of collecting some portion of the debt associated with these cases, which could be more than $1 billion. The success of SEC’s efforts to collect this debt will be closely related to the timely replacement of DPTS. Phase one of SEC’s action plan includes a tentative deadline for replacing DPTS by the end of fiscal year 2003. At that time, SEC will be able to identify all cases eligible for referral to FMS and TOP and develop a strategy for making these referrals. SEC has not yet set a milestone for completing the requirements analysis for phase two of its action plan or established a date to fully implement the computer system that will integrate SEC’s now separate databases. We are concerned that, without target dates, progress in implementing phase two could be slowed, affecting SEC’s ability to more efficiently address all cases that should be referred to FMS and TOP. Further, SEC’s progress has been slow in the 5 years since we recommended that the agency analyze industrywide information on SRO disciplinary program sanctions, in part because technological problems have hindered its ability to collect sufficient data to perform the analyses. 
SEC has not yet completed its first analysis and has no schedule for implementing the new disciplinary database intended to replace its current database. Finally, while controls were in place that should keep barred individuals from being readmitted to the securities and futures industries, neither the related statutes, SEC, nor CFTC require the SROs to ensure that the fingerprints sent to the FBI for use in criminal history checks belong to the applicants who submit them. In the absence of such a requirement, the SROs lacked related controls that could help prevent inappropriate admissions to the securities and futures industries. SRO involvement in weighing alternatives for addressing fingerprinting requirements for the securities and futures industries would ensure that concerns about cost-effective solutions are appropriately considered and addressed. We recommend that the SEC Chairman

- develop a formal strategy for referring pre-guidelines cases to FMS and TOP that prioritizes cases based on collectibility and establishes implementation time frames;
- take the necessary steps to implement the action plan to replace DPTS by (1) meeting the fiscal year 2003 milestone for implementing phase one of the plan, (2) setting a milestone for completing the requirements analysis for phase two of the plan, and (3) establishing and meeting the implementation date for phase two; and
- analyze the data that have been collected on the SROs’ disciplinary programs, address any findings that result, and establish a time frame for implementing the new disciplinary database that is to replace the current database.

We also recommend that SEC and CFTC work together and with the securities and futures SROs to address weaknesses in controls over fingerprinting procedures that could allow inappropriate persons to be admitted to the securities and futures industries. We requested comments on a draft of this report from the Chairmen, or their designees, of SEC and CFTC.
SEC officials provided written comments, which are reprinted in appendix II. CFTC provided oral comments. In general, both agencies agreed with the facts we presented and also agreed to implement the recommendations we made. SEC emphasized that it expected to meet its milestone for implementing a replacement database for DPTS by the end of fiscal year 2003 and said that once the new system was in place, the agency would be able to identify delinquent debts that had not been referred to FMS and TOP and set deadlines for making referrals. While SEC said that further milestones for phase two of its action plan will be set at some time in the future, it made no reference to establishing a time frame for implementing its new disciplinary database. We believe that SEC needs to move quickly to set time frames for both of these projects, because in the absence of dates on which to focus, progress may be delayed. SEC also said that agency staff will contact CFTC to review the possibility of adopting new industrywide fingerprinting standards, including procedures to verify the identities of all individuals who are being fingerprinted. CFTC officials told us that they would work with SEC and the SROs to address our recommendation. Finally, we also received technical comments from SEC and CFTC that we incorporated into the report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and its Subcommittee on Securities and Investment; the Chairman, House Committee on Energy and Commerce; the Chairman, House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises; and other interested congressional committees. 
We will send copies to the Chairman of SEC, the Chairman of CFTC, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site http://www.gao.gov. If you have any further questions, please call me at (202) 512-8678, [email protected], or Cecile Trop at (312) 220-7705, [email protected]. Additional GAO contacts and staff acknowledgments are listed in appendix V. To evaluate SEC’s and CFTC’s actions to improve their collection programs, we assessed their responses to our 2001 recommendations that (1) SEC take steps to ensure that regulations allowing SEC fines to be submitted to TOP are adopted; (2) SEC continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner; and (3) CFTC take steps to ensure that delinquent fines are referred promptly to FMS, including creating formal procedures that address both sending debts to FMS within the required time frames and requiring all of the necessary information from the Division of Enforcement on these debts. To assess steps SEC took to ensure that regulations allowing SEC fines to be submitted to TOP were adopted, we reviewed SEC’s final regulations and related procedures and collection guidelines. To determine compliance with the new collection guidelines for referring delinquent cases to TOP, we selected a judgmental sample of 66 post-guidelines fines and disgorgement cases using DPTS and obtained information from SEC on the referral status of those cases. Of the 66 cases, four were eligible for referral at the time of our review. We selected cases where judgments or orders were entered after SEC’s guidelines took effect, because staff told us they were tracking the referral of those cases. 
To determine the number, dollar amount owing, and age of the delinquent cases at the agency, we identified all cases with ongoing collections, using DPTS data as of January 31, 2003, and calculated the age from the judgment date (which in the absence of better data, we used as a rough proxy for the delinquency date) to January 31, 2003. Since DPTS was unreliable, the aging analysis provides only a rough estimate of the total number and age of cases. We interviewed SEC and FMS officials to obtain their views on SEC’s progress in referring cases to FMS and TOP and information on any impediments to this progress. To assess SEC’s efforts to continue to work with FMS to ensure that compromise offers presented by FMS are approved in a timely manner, we examined SEC’s procedures for processing compromise offers. We obtained data from SEC on the four compromise offers FMS submitted to SEC between July 1, 2001, and April 22, 2003, and analyzed the length of time it took for SEC to respond to the compromise offers. We obtained and used FMS’s data to validate SEC’s response time. We also interviewed SEC and FMS officials to discuss SEC’s policies, procedures, and controls and to obtain information on the agencies’ efforts to work together to ensure the timely approval of offers. We also obtained FMS’s views on SEC’s progress in responding to offers. To assess steps CFTC took to ensure that delinquent fines are promptly referred to FMS, we reviewed CFTC’s collection procedures, which it calls instructions, to ensure that they included time frames for referring cases to FMS and provisions for obtaining all necessary enforcement information. We also reviewed related agency controls. To assess staff’s compliance with the revised procedures, we obtained data from CFTC on its only four delinquent cases and analyzed the length of time it took to refer them to FMS. We obtained and used FMS’s data to validate that all of CFTC’s cases have been transferred within 180 days. 
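The debt-aging calculation described at the start of this appendix (using the judgment date as a rough proxy for the delinquency date, measured to the January 31, 2003, cutoff) amounts to simple date arithmetic. The judgment dates below are hypothetical:

```python
from datetime import date

# Sketch of the debt-aging calculation: absent a recorded delinquency date,
# the judgment date serves as a rough proxy, and age is measured to the
# January 31, 2003, cutoff used in the analysis. Judgment dates are invented.
CUTOFF = date(2003, 1, 31)

def age_in_days(judgment_date: date) -> int:
    """Approximate age of the delinquent debt as of the cutoff date."""
    return (CUTOFF - judgment_date).days

for judgment in [date(1998, 6, 15), date(2001, 11, 1), date(2002, 9, 30)]:
    print(f"{judgment}: roughly {age_in_days(judgment)} days delinquent")
```

Because the judgment date precedes the actual delinquency date, this proxy slightly overstates each case's age, consistent with the report's caveat that the aging analysis is only a rough estimate.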
We also interviewed CFTC officials to discuss the agency’s procedures and controls and obtained FMS’s views on CFTC’s progress in referring fines. To assess SEC’s and CFTC’s efforts to enhance their oversight of the SROs’ sanctioning practices, we assessed their responses to our 1998 and 2001 recommendations that (1) SEC analyze industrywide information on disciplinary program sanctions, particularly fines, to identify possible disparities among the SROs and find ways to improve the SROs’ programs; and (2) SEC and CFTC periodically assess the pattern of readmission applications to ensure that the changes in NASD’s and NFA’s fine imposition practices do not result in any unintended consequences, such as inappropriate readmissions. To assess the status of SEC’s efforts to analyze industrywide information on SROs’ disciplinary program sanctions, we interviewed SEC officials to discuss the types of analyses planned, any obstacles encountered, and efforts to overcome those obstacles. To assess both SEC’s and CFTC’s efforts to periodically assess the pattern of readmission applications, we interviewed officials of these agencies to determine the number of readmission applications from barred individuals and reviewed documentation that described the controls used to keep barred applicants from reapplying. We focused our review on permanent bars and application records since NASD and NFA changed their fine imposition practices in October 1999 and December 1998, respectively. To validate both agencies’ statements that they had not reviewed any readmission applications from barred individuals since our 2001 report, we obtained the names of barred individuals from NASD and NFA and verified that each individual had not applied for readmission. Specifically, for NASD, we compared the names of over 900 barred applicants who had not been fined against a list of readmission applications. 
We focused on these individuals because of concerns that individuals who had been barred and not fined might be more willing to seek readmission than those who had been barred and fined. For NFA, we researched the histories of 32 barred individuals, using NFA’s database to validate that none of the individuals had applied for readmission. We examined all barred applicants, including both those who had been fined and those who had not been, because the data did not allow us to distinguish between these groups. To ensure that NFA’s and NASD’s data were sound, we interviewed agency officials to assess the controls these agencies had over their data systems, such as their processes for entering and updating data, safeguards for protecting the data against unauthorized changes, and any tests conducted to verify the accuracy and completeness of the data. We found that the data were useable for our purposes. To address concerns that surfaced during our review about controls over the fingerprinting procedures used in criminal history checks, we interviewed officials at NASD, NFA, NYSE, and the FBI and reviewed laws and regulations related to fingerprinting. In addition to NYSE, other SROs that operate markets have agreements with the FBI under which they may submit fingerprints to the FBI for criminal history checks. We limited our review to NYSE because it is the largest SRO that operates a market, and we wanted to determine how another SRO’s procedures might differ from those of NASD and NFA. To calculate the fines collection rates for SEC, CFTC, and nine securities and futures SROs for 1997 through 2002 (all years were calendar years), we focused on these regulators’ imposition and collection of fines through their enforcement and disciplinary programs. 
The nine SROs included the American Stock Exchange, the Chicago Board Options Exchange, the Chicago Board of Trade, the Chicago Mercantile Exchange, the Chicago Stock Exchange, NASD, NFA, the New York Mercantile Exchange, and NYSE. We excluded fines for minor rule infringements such as floor conduct, decorum, and record-keeping violations that normally do not undergo disciplinary proceedings. The exchanges generally referred to these violations as “traffic ticket” violations, which are handled through summary proceedings and involve smaller fine amounts. We excluded amounts owed for disgorgement and restitution, except for NASD, because these sanctions are different from fines in that they are imposed to return illegally made profits or to restore funds illegally taken from investors. Due to the way NASD tracked its fines and payments, NASD was unable to exclude disgorgement amounts from its payment data. We also excluded fines that were not invoiced, because they would not be due unless the fined individual sought to reenter the securities industry. All other fines were factored into the rate, including fines dismissed in bankruptcy, to obtain the most complete view possible of the regulators’ efforts to discipline violators. To calculate annual fines collection rates and composite collection rates, we obtained and analyzed data from SEC, CFTC, and all SROs, except NASD, on fines levied from January 1997 through August 2002, and collected through December 2002. NASD’s data include fines invoiced from 1997 through 2002. We limited our review to fines levied through August 2002 to allow regulators through December 2002 (4 months) to attempt collections. We calculated the collection rate in two ways. First, we calculated the rate by including only closed cases—that is, cases with a final judgment order for which all collection actions were completed. This approach is consistent with the one used in our 1998 report.
Second, to provide a more complete view of regulators’ collection activities, we calculated the rate using all closed and open cases—that is, cases with a final judgment order for which collection actions were completed and cases with a final judgment order that remained open while collection efforts continued. For cases with a payment plan, we adjusted the levy amount to the amount owed as of December 31, 2002, because a portion of the original levied amount was not yet due. We could not do this for SEC or NASD because agency data did not specify the amount owed as of December 31, 2002. As a result, SEC’s and NASD’s rates may be understated. We also used NASD’s calculations of its collection rates, because the design of NASD’s financial system did not allow us to calculate these rates with an acceptable degree of accuracy using the approach we applied to other SROs. First, according to NASD officials, NASD’s calculations used the date a fine was invoiced instead of the date it was levied. Fines were typically invoiced between 15 and 45 days after they were levied. This difference may have had a minor effect, particularly on the annual collection rates. Second, NASD’s collection rates represent the total amount collected up to December 31, 2002, on fines invoiced from January 1997 through December 2002 (as opposed to the August 31, 2002, date for the other SROs). Third, because NASD’s system could not identify cases on a payment plan, NASD’s calculations do not adjust the fine amount to the amount owing as of December 31, 2002, exerting a slight bias toward understating the collection rate. Fourth, NASD’s collection rates (1) include disgorgement because NASD was not able to separate such amounts from its payment data and (2) exclude fines that were levied but not invoiced because such fines were not due unless the fined individual sought to reenter the securities industry.
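The two rate definitions, and the payment-plan adjustment, can be sketched as follows. The case records and field names are hypothetical, invented only to illustrate the calculation:

```python
# Hypothetical sketch of the two collection-rate definitions: closed cases
# only, versus all (open and closed) cases. For a case on a payment plan,
# the amount due as of the Dec. 31, 2002, cutoff replaces the full levied
# amount in the denominator, since the remainder was not yet owed.

def rate(cases):
    levied = sum(c.get("amount_due", c["levied"]) for c in cases)
    collected = sum(c["collected"] for c in cases)
    return 100.0 * collected / levied

cases = [
    {"levied": 500_000, "collected": 500_000, "status": "closed"},
    {"levied": 200_000, "collected": 150_000, "status": "closed"},
    # open case on a payment plan: $300,000 levied, but only $120,000 was
    # due by the cutoff, of which $100,000 had been paid
    {"levied": 300_000, "collected": 100_000, "status": "open",
     "amount_due": 120_000},
    {"levied": 1_000_000, "collected": 0, "status": "open"},
]

closed_rate = rate([c for c in cases if c["status"] == "closed"])
overall_rate = rate(cases)
print(f"closed cases only: {closed_rate:.0f}%")  # 93%
print(f"open and closed:   {overall_rate:.0f}%")  # 41%
```

As in the report's figures, adding open cases with large uncollected balances pulls the rate down sharply, which is why the two definitions are presented side by side.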
We also assessed the reliability of the data provided by the 11 regulators by asking officials about agency controls for collecting fines and payment data, supervising data entry, safeguarding the data from unauthorized changes, and processing that data. We also asked whether they performed data verification and testing. Although the controls varied across the agencies, each one demonstrated a basic level of system and application controls. We also performed basic tests of the integrity of the data we received from some of the regulators that provided us with individual fines data. We concluded that the data from all of the organizations, except SEC, was sufficiently reliable for the purposes of this report. The number of errors we and SEC found in DPTS during the course of our work and the findings of the January 3, 2003, report to the SEC Inspector General that the data in DPTS were incomplete and inaccurate led us to conclude that DPTS fines data remain insufficiently reliable to calculate an accurate collection rate. While we cannot be sure of the magnitude or direction of the errors in the DPTS fines data, we are nevertheless reporting the number and dollar value of cases eligible for referral to FMS and TOP, the age of this debt, and SEC collection rates as the best estimates possible at this time. We did our work in accordance with generally accepted government auditing standards between August 15, 2002, and July 1, 2003. We performed our work in Boston, Mass.; Chicago, Ill.; New York, N.Y.; and Washington, D.C. We calculated the collection rates using data from SEC and the SROs, except for NASD, which calculated its own rates (see appendix I for further details). The rates are based on fines levied from January 1997 through August 2002 and include all amounts collected on those fines through December 2002, except for NASD. The fines data listed for each year represent collection activity on the fines levied in each of those years. 
Percentages were rounded to the nearest whole number. We calculated the collection rates using data from CFTC and the SROs. The rates are based on fines levied from January 1997 through August 2002 and include all amounts collected on those fines through December 2002. The fines data listed for each year represent collection activity on the fines levied in each of those years. Percentages were rounded to the nearest whole number. In addition to those named above, Emily Chalmers, Marc Molino, Carl Ramirez, Jerome Sandau, Michele Tong, Sindy Udell, and Anita Zagraniczny made key contributions to this report.
| Collecting fines ordered for violations of securities and futures laws helps ensure that violators are held accountable for their offenses and may also deter future violations. The requesters asked GAO to evaluate the actions the Securities and Exchange Commission (SEC) and Commodity Futures Trading Commission (CFTC) have taken to address earlier recommendations for improving their collection programs. The committees also asked GAO to update the fines collection rates from previous reports. SEC and CFTC have improved their collection programs since GAO issued its 2001 fines report. While it was too early to fully assess the effectiveness of their actions, SEC could be doing more to maximize its use of Treasury's collection services. SEC has implemented regulations, procedures, collections guidelines, and controls for using the Treasury Offset Program (TOP), which applies payments the federal government owes to debtors to their outstanding debts. However, SEC has been focusing on referring to TOP those delinquent cases with amounts levied after its new collections guidelines went into effect. The agency has not developed a formal strategy for referring older cases, reducing the likelihood of collecting monies on what could be more than a billion dollars of delinquent debt. Further impeding collection efforts, SEC does not have a reliable system for tracking monies owed on these older cases and therefore could not determine which cases were not being referred to TOP. SEC has drafted an action plan for a new system to track all cases with a monetary judgment. Once the system is in place, the agency should have a tool for identifying all cases, including older delinquent cases that can be referred to TOP. However, SEC has not established a time frame for fully implementing the plan.
GAO's calculations for closed cases (collection actions completed) showed that regulators' collection rates on fines imposed between 1997 and August 2002 equaled or exceeded those from 1992 to 1996. Recalculating the rates to include closed and open cases (collection actions ongoing) affected SEC's and CFTC's collection rates, primarily because of a few large uncollected fines. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Creation of the USEAC network can be best understood in the context of sweeping efforts made during this decade to strengthen federal delivery of export promotion services. During 1991-93, we conducted a number of reviews of federal export promotion activities. We then reported on a governmentwide effort that cost over $2.7 billion and that was fragmented among several agencies with no overarching strategy or explicit set of priorities. Among our specific findings, we reported that U.S. firms seeking export assistance were likely to become confused and discouraged by the multiple networks of domestic offices maintained by federal agencies for delivering export services. Partially in response to our work, Congress enacted the Export Enhancement Act of 1992 (Public Law 102-429, Oct. 21, 1992), which created in statute the interagency Trade Promotion Coordinating Committee (TPCC) and tasked it with developing a strategic plan for strengthening federal export promotion services. This legislation also directed the U.S. & Foreign Commercial Service—the Commerce Department agency responsible for managing its domestic field network—to utilize its district offices as “one-stop-shops.” These shops would be able to (1) provide exporters with information on all U.S. government export promotion and export finance services, (2) assist exporters in identifying which federal programs may be of greatest assistance, and (3) help exporters make contact with those federal programs. TPCC, on September 30, 1993, issued its first National Export Strategy report, which contained 65 recommendations for federal action to help U.S. exporters. Among these, the strategy recommended the creation of “one-stop shops” that would integrate primarily representatives of the Department of Commerce and SBA—two federal agencies with extensive export promotion field networks—and Eximbank. 
It further recommended that the agencies establish four pilot “one-stop shops” in Baltimore, Chicago, Los Angeles, and Miami. As envisioned by the strategy, these “one-stop shops” would exceed the minimum requirements of the 1992 Export Enhancement Act in that they would actually contain the staff of the three agencies rather than simply have Commerce staff provide information about these and other agencies’ export programs. In commenting on the National Export Strategy, we presented our views on the process for creating the network of “one-stop shops.” We stated that, before establishing an expansive network, the participating agencies should first evaluate the results of the four pilots to determine whether providing the full range of export promotion services in an integrated way can increase the value to the business community of federal export promotion assistance. We further stated that the aim of the USEAC network should not simply be to co-locate or even coordinate, but “to integrate and make more accessible a range of export services aimed at small- to medium-sized export-ready firms.” (Emphasis added.) With Commerce taking the lead, the three agencies by January 1994 had established the four pilot “one-stop shops”—now called U.S. Export Assistance Centers. Although the TPCC’s export strategy stated that these USEACs would go through a rigorous evaluation process, Commerce and its partner agencies decided to move forward with expanding the network before such evaluations could be concluded. By late February 1996, the three agencies had expanded the network to 14 USEACs, along with 10 District Export Assistance Centers (DEAC), which have only Commerce staff and are connected to the USEACs in a hub-and-spoke system. Commerce and its partner agencies opened four additional DEACs by June 1996 and have plans to further expand the network in the future. 
USEAC staff and customers, and officials of nonfederal partner organizations told us that, because of the USEACs, U.S. firms are more knowledgeable about and have access to a broader range of federal and nonfederal export services. Customers, however, also indicated that USEACs can improve the delivery of those services to the U.S. export community. Approximately 63 percent of the USEAC staff responding to our survey said that establishment of the USEACs had increased the overall quality of export services to a great or very great extent. About 80 percent of our respondents stated that the USEACs had, in particular, substantially increased customer access to the full range of federal export promotion services to a great or very great extent. Eighty-two percent of the survey respondents also cited significant increased cooperation among the staffs of the three participating agencies, which we believe would help to expand the availability of federal export services as USEAC staff work collaboratively or refer clients to partner agencies. During our visits to the four USEACs, we learned of some specific examples of USEAC staff taking the initiative to enhance the value of their services to exporters by working closely with federal and nonfederal partner organizations. These examples demonstrate the potential benefits that can be derived from creation of the USEACs. At the Baltimore USEAC, the Commerce staff made an effort, as part of their counseling activities, to generate clients for the Maryland Industrial Development Financing Authority, a state agency that provides export financing. At the Long Beach, California, USEAC, the director introduced the “Export-Trade Assistance Partnership” program, which sought to utilize the skills and knowledge of federal and nonfederal partner organizations to increase the export know-how of firms that are not yet ready to export. 
At the Chicago and Miami USEACs, the Eximbank and SBA staffs closely coordinated their outreach efforts. These individuals were familiar with the financing services of both agencies and referred clients when appropriate. We surveyed the four USEACs’ 60 “best customers” (15 for each USEAC) who had received services from more than one USEAC agency, as identified by the USEAC directors. Of the 40 “best customers” who responded to our survey, a majority stated that they were very satisfied with the export services provided by the USEACs. Their satisfaction was based on such factors as timeliness, staff knowledge, and usefulness of the services obtained. However, the customers responding to our survey also saw room for improvement in USEAC agency efforts to work as a unit in the delivery of services. Of the 28 customers who acknowledged receiving services from a second USEAC agency, 11 (40 percent) indicated that they had found the second agency by themselves, rather than through their USEAC contact. We also found that, of the 17 customers who did acknowledge receiving services from the second agency as a result of their USEAC contact, 12 stated that they had received useful services from more than one government agency. Several of the customers we interviewed told us that the USEAC staff member(s) they regularly worked with did not inform them of the full range of services provided by the USEACs, even after they had expressed a need for the services of another USEAC agency. The decision by Commerce and its partner agencies to co-locate staff (rather than just meet the minimum requirements of the 1992 Export Enhancement Act) presented an opportunity to substantially improve the delivery of federal export promotion services. On the basis of our site visits, surveys, and discussions with USEAC staff, customers, and nonfederal partners, we identified certain basic interagency mechanisms that, if established, could better ensure an improved delivery of services. 
Despite the increased cooperation among agency staffs, we found during our interviews with USEAC staff that they did not consistently work as a team. For example, we learned that individuals at certain USEACs were reluctant to recommend the services of another agency, even to clients who expressed a need, because they were unfamiliar with that agency’s performance in delivering the service. To better promote teamwork, USEAC directors told us they needed authority to contribute to USEAC staff appraisals with regard to intra-USEAC teamwork. To do this, the agencies would need to include on their appraisals a performance factor on intra-USEAC teamwork and develop relevant performance measures. These performance measures could specify, for instance, the number of referrals among USEAC staffs and, possibly, how often such referrals led to export promotion or financing services. Currently, each agency appraises its own staff. The appraisal forms for Commerce and SBA staff contain at least one factor directly relating to intra-USEAC teamwork. Commerce officials informed us that the agency informally permits USEAC directors (i.e., those who are not Commerce employees) to contribute to appraisals of Commerce staff with regard to several USEAC-related factors. SBA officials informed us that the agency has formally given USEAC directors (i.e., those who are not SBA employees) authority to contribute to appraisals of SBA staff with regard to one USEAC-related factor. The Eximbank has this issue under consideration as part of a major restructuring of the agency’s performance appraisal system. To further improve the quality of services to customers, USEAC directors and staff acknowledged their need for a USEAC-wide, computer-based client tracking system. With such a system, USEAC staff would be able to readily obtain information that another agency might have on a potential client or determine whether it has already received services from another USEAC agency. 
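At its core, the USEAC-wide tracking capability the staff described amounts to a shared record of which agency has served which client. The sketch below is a minimal, hypothetical illustration of that idea (all client names and services are made up; this is not the off-the-shelf system the agencies planned to install):

```python
from collections import defaultdict

# client -> agency -> list of services delivered (all entries hypothetical)
client_services = defaultdict(lambda: defaultdict(list))

def record_service(client, agency, service):
    """Log a service an agency delivered to a client."""
    client_services[client][agency].append(service)

def already_served_by(client, agency):
    """Check whether a given USEAC agency has already worked with a client."""
    return bool(client_services[client][agency])

record_service("Acme Exports", "Commerce", "market research counseling")
record_service("Acme Exports", "Eximbank", "export credit insurance referral")

print(already_served_by("Acme Exports", "Eximbank"))  # True
print(already_served_by("Acme Exports", "SBA"))       # False
```

A shared lookup of this kind is what would let staff avoid duplicate information requests and spot clients who have already received another agency's services.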
We believe that having this ability would help ensure that USEAC staff do not suggest inappropriate services or make duplicate requests for information. Such a system could also serve as a source for identifying potential clients to pursue in marketing export services. At the time of our visits, the agencies at each of the four USEACs used a separate client tracking system. Commerce staff used the agency’s “Commercial Information Management System”—a worldwide data base that links Commerce headquarters, its domestic field network, and overseas offices. Eximbank staff used an off-the-shelf computer program for maintaining information on customers. SBA staff used mostly paper filing systems but sometimes employed the Commerce or Eximbank data bases. Commerce had offered to make its system available to all USEAC staff but staff we spoke with did not support such a move. They generally characterized the Commerce system as slow, cumbersome, and otherwise not able to meet their needs. Some also expressed concern that placing proprietary business information on a worldwide data base could compromise its confidentiality. Although Commerce staff are required to use this system, we found that they have done so to widely varying degrees. These ranged from using it as a true client data base, with detailed information on each customer and Commerce services received, to using it as nothing other than a list of contacts. Commerce, Eximbank, and SBA officials recently told us that they see the development of a client tracking system as a high priority for the USEACs. They plan to install at all the USEACs an off-the-shelf client tracking system that is currently under development. Some USEAC directors also saw the need for (1) adequate authority over USEAC expenditures and (2) a USEAC-wide accounting system that would permit USEACs to accurately identify and allocate costs and better manage expenditures. 
With regard to the former, our review indicated that USEAC directors did not have authority to make routine expenditures for such things as printing marketing brochures, using temporary employees to fill in for staff on long-term leave, or buying copiers or other office equipment. USEAC directors and staff told us that, to make purchases, they currently must use Commerce’s procurement approval process. They characterized this process as being very lengthy and time-consuming, due largely to paperwork requirements and multiple layers of review. USEAC staff told us that they saw themselves devoting too much time to these purchases, which often were made long after the need arose. With regard to the need for a USEAC-wide accounting system, USEAC directors told us that they could not identify the costs associated with creating and maintaining the USEACs and allocate these costs among the three participating agencies. They told us that, if they had an adequate system, they could also better assess the relative cost-effectiveness of various tools used by USEACs to reach and deliver export services to U.S. firms. For example, USEAC directors may use a variety of ways to market their services, including mailings to exporters, participation in trade events and export shows, and/or through making cold telephone calls to exporters. Knowing the relative cost of these activities, as well as the results, would help in determining which of these (either singly or in combination) is most cost-effective. Currently the USEACs do not have such information. Further, we learned that under memorandums of understanding negotiated by the three agencies, Commerce’s International Trade Administration (ITA) was to cover all USEAC-related expenditures, allocate them among the participating agencies, and seek reimbursement. 
Eximbank and SBA officials told us that Commerce had been unable to allocate USEAC-related costs among the three agencies and, as a result, had not provided them with an adequate accounting of USEAC costs. Instead, ITA forwarded invoices for “USEAC expenses” that lacked detail. Commerce, Eximbank, and SBA officials recently told us that they have agreed to allocate expenses based on a formula that reflects the limited capabilities of ITA’s financial accounting system. This agreement is to be reflected in a revised memorandum of understanding, which has not yet been signed by all USEAC agencies. To obtain whatever financial data might be available on the USEAC network, we asked the three agencies to compile information on their USEAC-related expenditures. Commerce sought to get the information requested from the individual USEACs, who themselves had no common accounting mechanism to track costs. The Eximbank and SBA relied on centralized financial management systems for the requested information. The data Commerce officials provided to us was heavily qualified and could not be reconciled with Eximbank and SBA data. Therefore, the actual cost of creating and maintaining the USEAC network was not known. The agencies recently told us that they are currently piloting a separate financial management system for the USEACs. They anticipate that this system will provide a more precise accounting of expenditures. 
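A formula-based split of shared costs, of the kind the agencies agreed to, can be illustrated with made-up shares (the percentages and amounts below are assumptions for illustration only; the actual negotiated formula is not described in this statement):

```python
# Illustrative fixed allocation shares -- assumptions, not the agencies' formula
shares = {"Commerce": 0.60, "Eximbank": 0.15, "SBA": 0.25}

def allocate(total_expense, shares):
    """Split a shared expense by fixed shares so the pieces sum exactly."""
    cents = round(total_expense * 100)
    allocated = {a: round(cents * s) / 100 for a, s in shares.items()}
    # Push any rounding residue onto the largest share so totals reconcile.
    residue = round(total_expense - sum(allocated.values()), 2)
    largest = max(shares, key=shares.get)
    allocated[largest] = round(allocated[largest] + residue, 2)
    return allocated

print(allocate(10_000.01, shares))
```

Even a simple scheme like this would have let ITA send its partner agencies invoices that itemize how each "USEAC expense" was derived.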
Based on our review, we recommend that the Secretary of Commerce, working with the Chairman of the Eximbank and the Administrator of SBA, (1) give all USEAC directors the authority to contribute to the performance appraisals of all USEAC staff with regard to intra-USEAC cooperation and teamwork (including development of an appropriate performance factor for staff appraisals and performance measures), (2) establish a USEAC-specific customer tracking system that contains information on clients and services provided to them, and (3) set up an accounting system that accurately tracks the full costs of creating and operating the USEAC network and, as part of that process, incorporate ways to give USEAC directors greater authority over USEAC expenditures. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or other Members of the Subcommittee may have. During our week-long visits to each of the four pilot USEACs (in Baltimore, Chicago, Long Beach, and Miami) in May-June 1995, we administered two survey instruments. One survey sought the views of USEAC staff and focused on various operational issues such as cooperation among USEAC agency staff (as well as with nonfederal partners) and the quality of services delivered. The other survey sought the views of USEAC customers and focused on a number of dimensions of program delivery such as access to export services, USEAC staff knowledge, and the timeliness and usefulness of the USEAC services obtained. We surveyed and interviewed the USEAC directors and every member of the staff that was available during the time of our visit. Individuals to be surveyed were determined jointly by the USEAC directors and our staff. The surveys were completed anonymously. In all, we received 44 replies, which represented a response rate of about 85 percent. Highlights of our survey results follow.
The overwhelming majority of USEAC staff believed that the establishment of the USEAC had increased cooperation among the USEAC agencies (82 percent “to a great/very great extent”) and substantially increased customer access to federal export promotion services (80 percent). With respect to other factors, USEAC staff believed the USEACs had (1) improved the quality of services they personally deliver (63 percent), (2) increased export-ready customers’ ability to export (58 percent), and (3) improved cooperation with nonfederal partners (50 percent). USEAC staff rated their USEACs on progress toward integrating operations across several dimensions using a 10-point scale (with a score of 10 representing complete integration). They gave referrals an average integration score of 7.0 (out of a possible 10). Other dimensions were given a lower score, such as administrative resources (an average score of 4.5) and customer tracking systems (an average score of 4.2). Overall, USEAC staff gave high satisfaction ratings (e.g., “very” or “somewhat” satisfied) for various factors, including responsiveness of agencies to each others’ referrals (97 percent), accessibility of other USEAC agencies (93 percent), and quality of referrals from other USEAC agencies (85 percent). The officials were less satisfied with such factors as information-sharing with nonfederal partners (66 percent), the relationship between USEAC agency officials and the agency officials at local regional offices (56 percent), and the recognition they received for their efforts at promoting the USEACs (37 percent). In surveying the USEAC customers, we asked the USEAC directors to identify their 60 “best customers” (15 from each USEAC) who had received services from more than one USEAC agency. We surveyed all 15 clients at each USEAC and selected 5 clients to interview, based largely on availability and proximity to the USEAC. 
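The response rates reported in this statement follow directly from the underlying counts; a trivial arithmetic check (the staff total of roughly 52 is an inference, since only the 44 replies and the "about 85 percent" rate are given):

```python
def response_rate(responses, surveyed):
    """Share of surveyed individuals who replied."""
    return responses / surveyed

# Customer survey: 40 of the 60 "best customers" replied.
print(f"customer response rate: {response_rate(40, 60):.0%}")  # 67%

# Staff survey: 44 replies at about 85 percent implies roughly 52 staff
# surveyed (44 / 0.85 = 51.8) -- an inference, since the total is not stated.
print(f"implied staff surveyed: {44 / 0.85:.0f}")
```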
We received 40 survey responses (13 customers of the Baltimore USEAC, 11 from Chicago, 8 from Long Beach, and 8 from Miami) for a response rate of 67 percent. Of the 40 survey respondents, 12 indicated that they had not received a service from a second USEAC agency. Highlights of our survey results follow. The customers who replied to our survey expressed high levels of satisfaction with the individual agencies from which they had received services. For example, 83 to 93 percent of the respondents were satisfied or very satisfied with the timeliness, staff knowledge, and usefulness of the services provided by the first USEAC agency. Of the 28 customers who acknowledged receiving services from a second USEAC agency, 17 received services from the second agency as a result of the USEAC contact, and 12 of these stated that they had received useful services from more than one government agency. Customers gave the USEAC agencies high marks (92 percent generally high to very high) for projecting a business image and for providing follow-up. The USEAC agencies did not receive as high a mark for promoting their services (75 percent). Customer responses regarding USEAC agency referrals to another USEAC agency showed that referrals were not always made when services were desired. Of those customers who acknowledged receiving services from more than one USEAC agency, about 40 percent said that they had learned about the second agency themselves or through a non-USEAC source and had initiated the contact.
| GAO discussed opportunities to improve U.S. Export Assistance Centers' (USEAC) operations. GAO noted that: (1) staff and customers at the four USEAC surveyed believed that collocating agency staff and nonfederal partner organizations improved export delivery services by increasing customer access to federal export promotion services; (2) although customers were highly satisfied with individual agencies' services, they believed that cooperation among agency staffs could be improved; (3) 40 percent of the customers who used a second USEAC agency found the agency on their own without help from their USEAC contact, even though some of these customers expressed a need for another agency's services; (4) some USEAC staff were reluctant to recommend other agencies' services because they were not familiar with those agencies' performance in service delivery; (5) to improve teamwork, USEAC directors believed that they needed to have input to staff performance appraisals with regard to intra-USEAC teamwork, a USEAC-wide client tracking system, adequate authority over USEAC expenditures, and a USEAC-wide accounting system; (6) three federal agencies were considering ways to give USEAC directors input to staff appraisals and plan to install an off-the-shelf client tracking system; and (7) the three agencies have agreed to allocate USEAC expenses based on a formula that reflects the limited capabilities of the International Trade Administration's accounting system, but they are working on a separate financial
management system for USEAC. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The GAO Cost Estimating and Assessment Guide explains how four characteristics of a high-quality, reliable cost estimate can be understood in relation to 12 leading practices. The extent to which an agency meets the leading practices underlying each characteristic determines its performance for that characteristic. For example, we consider the comprehensive characteristic to be "substantially met" if the organization substantially meets the underlying leading practices of (1) developing an estimating plan and (2) determining an estimating structure. Because the leading practices are separate and discrete, an agency's performance in each of the characteristics can vary. For example, an organization's cost estimating methodology could be found to be comprehensive and well documented, but not accurate or credible, resulting in the organization producing cost estimates of limited reliability. Table 1 illustrates the relationship of the 12 leading practices to the four characteristics of a high-quality, reliable estimate. AOC project development consists of three stages: planning, design, and construction. According to AOC's guidance, AOC refines requirements and updates cost estimates as projects progress through these stages. AOC has guidance and requirements for cost estimates at each stage of a project's development. In general, as projects develop over time and requirements are refined, the accuracy of cost estimates is expected to increase. Given the current fiscal environment and existing building conditions, AOC must prioritize projects in its capital program and decide to either request funding for a project or defer it while mitigating potential facility issues. For example, in fiscal year 2014, AOC requested almost $155 million for 17 projects that AOC deemed urgent, while deferring 46 projects estimated to cost about $172 million. Two of the most recognizable projects that AOC considers urgent are the Cannon Building renewal and the Capitol Dome restoration.
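The roll-up from leading practices to characteristics can be expressed as a simple rubric. The sketch below treats a characteristic's rating as the lowest rating among its underlying practices; that rule, and the example practice ratings, are assumptions for illustration rather than GAO's published scoring method:

```python
# Five assessment levels, ordered from worst to best
LEVELS = ["does not meet", "minimally meets", "partially meets",
          "substantially meets", "fully meets"]

# Hypothetical ratings for the two leading practices the text maps to the
# "comprehensive" characteristic (the ratings themselves are made up).
practice_ratings = {
    "develop the estimating plan": "fully meets",
    "determine the estimating structure": "substantially meets",
}

def characteristic_rating(practice_ratings):
    """Assume a characteristic is held to its weakest underlying practice."""
    return min(practice_ratings.values(), key=LEVELS.index)

print(characteristic_rating(practice_ratings))  # substantially meets
```

Under this assumed rule, one weak practice pulls the whole characteristic down, which mirrors the guide's point that performance on each characteristic can vary independently.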
The Cannon Building, completed in 1908, is the oldest congressional office building and is occupied by members of the House of Representatives and their staffs. (See fig. 1.) The building houses 142 office suites, 5 conference rooms, 4 hearing rooms, and the Caucus Room, which can accommodate large meetings. The building also provides space for a library, food service, and a health unit. According to AOC planning studies, the building is plagued with serious safety, health, environmental, and operational issues that are worsening. For example, without action, essential systems for heating and cooling that are located behind walls and in mechanical rooms will continue to deteriorate, potentially negatively affecting Members of Congress, staff, and constituents. AOC has been developing the scope of the Cannon Building's renewal project since approximately 2004, when AOC's consultant conducted a facility condition assessment (FCA) that identified the building's deficiencies. In 2009, we reviewed AOC's progress in developing the Cannon Building project and found that, while it had followed a reasonable process to plan the building's renewal, such as through updating the FCA, it was important that AOC continue, as planned, to refine the project's scope and cost estimate. Since our 2009 review, AOC has proceeded with the design phase of the project, based upon its budget of $753 million for the planning, design, and construction phases. The project is currently expected to be completed by 2025. According to AOC officials and current design documents, AOC plans to correct most of the Cannon Building's identified deficiencies and to address requirements such as energy conservation, physical security, hazardous materials abatement, and historic preservation. The project is to involve substantial reconfiguration of interior and exterior spaces, to include reconstructing the building's top floor, which now partially consists of storage space, and landscaping the courtyard.
The project is also expected to provide refurbished windows and a new roof. Additional work is intended to preserve and repair the building’s stone exterior. The project is also expected to allow for complete replacement of all plumbing, heating and cooling, fire protection, electrical, and alarm systems; refurbish restrooms and make them more accessible to people with disabilities; and provide new wall and floor finishes in some areas. In addition, the project includes removing asbestos that may be contained in plaster ceilings and walls. AOC plans to conduct the work in phases corresponding to the four sections of the building and including an initial phase for utility work as shown in figure 2. Tenants displaced during construction of each section are to move to temporary offices while other occupants will remain in building sections not affected by construction. The U.S. Capitol Dome, an important symbol of American democracy and an architectural icon, was constructed of cast iron more than 150 years ago. According to AOC, the dome has not undergone a complete restoration since 1960, and due to age and weather is now plagued by more than 1,000 cracks and deficiencies that are causing it to deteriorate. Figure 3 shows cracks at the exterior column base and deteriorating interior paint. The project is intended to stop deterioration in the dome’s cast iron structure as well as to ensure the protection of the interior of the dome and rotunda. In the 1960 restoration, the dome was stripped of its paint so the ironwork could be repaired, primed with a rust inhibitor, and then repainted. As part of the current project, AOC is undertaking similar restorative work to include removing old paint, repairing the cast iron, and repainting. The project has proceeded in phases as shown in table 2, with phase IIA restoration work currently in progress. 
In earlier phases, AOC completed interim painting, revalidated the project's design, and restored the base—or skirt—of the dome. To complete the project, AOC is seeking appropriations of about $20 million for construction and other costs for phase IIC work in fiscal year 2015. As of February 2014, AOC estimated the total cost of the Capitol Dome restoration project at about $125 million. As previously discussed, the GAO Cost Estimating and Assessment Guide defines 12 leading practices related to four characteristics—comprehensive, well documented, accurate, and credible—that are important to developing high-quality, reliable estimates. Our analysis determined how well AOC met a characteristic based on our assessment of AOC's conformance to the leading practices related to that characteristic. We discuss characteristics using five rating categories—does not meet, minimally meets, partially meets, substantially meets, or fully meets. As shown in table 3, our analysis found that AOC's cost-estimating policies and guidance contribute to estimates that are comprehensive and well documented in that AOC fully meets most of the tasks that underlie the leading practices associated with these two characteristics. We found that AOC's cost-estimating policies and guidance partially met the accuracy characteristic and minimally met the credible characteristic based on their conformance to the leading practices associated with those characteristics. Table 3 also contains key examples of our rationale for our assessment of each leading practice and characteristic. For example, for the comprehensive characteristic and the leading practice of developing the estimating plan, our rationale notes that AOC has a formal process that develops the estimating plan, including describing responsible parties and defining specific tasks; that the estimating structure follows a work breakdown structure (WBS); and that AOC uses different types of cost estimates and has defined when each applies and what each entails.
AOC has formally defined program characteristics. AOC points to supporting documents that, when developed, should contain ground rules and assumptions specific to a given project. AOC guidance is specific about sources of data; these include commercially available construction cost databases as well as market and industry knowledge, staff knowledge, and historical data. In reference to the leading practices underlying the characteristics of a comprehensive cost estimate, we found that AOC has a formal process for developing estimating plans and follows an estimating approach incorporating a work breakdown structure (WBS) that is widely used in the construction industry. In developing estimating plans, we found that AOC's policies and guidance that describe the project-planning process provide the framework for meeting this leading practice. In the planning stage, AOC project managers establish project scope and outline roles and responsibilities for planning, design, and construction stages. To facilitate these efforts, AOC project managers are to use particular documents. One document, the Project Development Form, provides a basis for preparing requirements studies that help to define scope and estimate costs. AOC then is to use this information in developing Project Management Plans that are to describe other key components of project delivery. These components include defining the strategy for providing design and construction services, identifying project team members, establishing a communications plan, and setting project controls, such as for managing project changes. AOC also considers cost reporting a component of the project delivery process. For example, AOC typically requires its design contractors to provide cost reports at the same time incremental design submittals are made.
Because AOC's project development process accounts for cost reporting, we determined that it satisfies the intent of the leading practice encouraging use of a formal process for developing estimating plans. With regard to leading practices related to producing well-documented estimates, we found that AOC requires formal definitions of program characteristics, such as the program's purpose; develops ground rules and assumptions from supporting documents; requires the use of cost data sources that are specific to the construction industry (i.e., obtain the data); requires documenting the estimate and supporting documents; and requires management approval of the cost estimate. AOC fully meets the requirements of most of these leading practices. For example, AOC's policies and guidance for the project development process provide the structure to enable an adequate understanding of program characteristics—such as key design features, technical definitions, and the acquisition strategy—that will comprise the cost estimate. In addition, AOC guidance establishes requirements for use of industry-accepted cost-data sources and allows for application of staff knowledge and historical data in developing estimates. AOC guidance also provides for estimates to be documented to show important parameters, assumptions, descriptions, methods, and calculations used to derive the estimate. However, while AOC requires management approval of its cost estimates, it falls short of fully meeting the requirements of this leading practice. In particular, AOC's briefings do not include the level of information defined by our leading practice, particularly information related to risks associated with the underlying data and methods. This is due, in part, to AOC's policies and guidance not requiring a risk and uncertainty analysis in developing estimates. We discuss this leading practice in the following section of this report.
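To make concrete how a WBS-based estimate documents its calculations—unit costs rolled up by element into a single point estimate—the following is a minimal Python sketch. All WBS element names and dollar figures here are invented for illustration; they are not AOC data.

```python
# Minimal sketch of a WBS-based point-estimate roll-up.
# All WBS element names and costs are invented for illustration (not AOC data).

def rollup(wbs):
    """Sum leaf-element costs into branch subtotals and an overall point estimate."""
    subtotals = {branch: sum(leaves.values()) for branch, leaves in wbs.items()}
    return subtotals, sum(subtotals.values())

wbs = {
    "1 Substructure": {"1.1 Foundations": 2_500_000, "1.2 Slab on grade": 900_000},
    "2 Shell":        {"2.1 Superstructure": 7_200_000, "2.2 Exterior closure": 4_100_000},
    "3 Interiors":    {"3.1 Partitions": 1_800_000, "3.2 Finishes": 2_300_000},
}

subtotals, point_estimate = rollup(wbs)
# point_estimate is 18,800,000 for these invented figures
```

Structuring the estimate this way keeps each cost element traceable and helps ensure that elements are neither omitted nor double counted, which is the intent behind the WBS leading practice.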
We found that AOC policies and guidance do not require enough detail to suggest that resulting estimates would be accurate and credible. In reference to accuracy, we found that AOC's policies and guidance partially meet the respective underlying leading practices pertaining to this characteristic. According to AOC's policies and guidance, AOC is to develop estimates as projects progress through planning and design phases that are based on sufficiently detailed documentation of construction requirements. AOC is to then use these estimates to support budget requests for construction funding. However, AOC's guidance does not require that cost estimates be updated with actual costs during a project's construction phase—a leading practice. According to the Cost Guide, updating estimates to reflect actual costs as the project progresses allows agencies to review variances between planned and actual costs and provides insight as to how the project changed over time. In reference to credibility, we found that AOC's policies and guidance minimally meet the underlying leading practices pertaining to this characteristic. For example, we found that AOC's guidance does not require following all steps for conducting a "sensitivity analysis," determining the estimate's reasonableness, and conducting a risk and uncertainty analysis. While AOC guidance does discuss conducting some sensitivity analysis, it skips many associated tasks of the leading practices we identified, such as identifying key cost drivers, establishing ground rules and assumptions for sensitivity testing, and evaluating the results to determine which drivers most affect the cost estimate. Similarly, while AOC guidance provides that contingencies be added to estimates to account for risk and uncertainty, AOC's guidance does not provide documented reasons explaining how the actual budgeted amounts for unforeseen costs were developed.
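A sensitivity analysis of the kind described above—varying key inputs one at a time and evaluating which drivers most affect the estimate—can be sketched briefly. The cost elements and the 20 percent swing below are assumptions chosen purely for illustration, not AOC figures or a prescribed method.

```python
# One-at-a-time sensitivity-analysis sketch; element values and the 20% swing
# are invented for illustration (not AOC data).
base = {"labor": 10_000_000, "materials": 6_000_000, "equipment": 2_000_000}

def total(costs):
    return sum(costs.values())

def sensitivity(base, swing=0.20):
    """Swing each element +/- `swing` while holding the others fixed; return
    the resulting range of the total estimate attributable to each element."""
    ranges = {}
    for element, value in base.items():
        high = dict(base, **{element: value * (1 + swing)})
        low = dict(base, **{element: value * (1 - swing)})
        ranges[element] = total(high) - total(low)
    return ranges

ranges = sensitivity(base)
key_driver = max(ranges, key=ranges.get)  # "labor" dominates in this example
```

Ranking the elements by the range they induce in the total is what identifies the key cost drivers the Cost Guide asks estimators to evaluate.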
AOC officials told us that while their policies and guidance do not require sensitivity or “quantitative-risk and uncertainty analyses,” they perform such assessments qualitatively to establish budget contingency. However, the leading practice to determine whether a program is realistically budgeted is to perform a quantitative-risk and uncertainty analysis, so that the probability associated with achieving its point estimate can be determined. The results of a quantitative-risk and uncertainty analysis—the range of costs around a point estimate—can be useful to decision makers because it conveys the confidence level in achieving the most likely cost and informs them about cost, schedule, and technical risks. Not having an understanding of an estimate’s confidence level limits AOC’s ability to determine the appropriate level of contingency that is needed to address risks and uncertainty for a particular project and could lead AOC to ineffectively allocate resources across competing projects if contingency levels are either overstated or understated. In addition, absent information on estimates’ confidence levels, Congress will not have critical information for making well-informed funding decisions. In addition, pertaining to the leading practice of developing a point estimate and comparing it to an independent cost estimate (ICE)—which affects both accurate and credible characteristics—we found that the independent estimates defined by AOC’s estimating process have a limited degree of independence and focus solely on proposed contractor costs rather than on the entire cost estimate, including both government and contractor efforts. According to our leading practices, an ICE, conducted by an organization outside the program office, provides an objective and unbiased assessment of whether the agency’s program estimate can be achieved. 
However, because AOC's project management and cost-estimating functions are performed within the same organizational group, the estimates could potentially be influenced by AOC's project managers and therefore could be susceptible to bias. Because of the weaknesses in AOC's policies and guidance pertaining to two of the four characteristics that are important to developing high-quality, reliable estimates, the project cost estimates that AOC produces may not always be reliable. Without reliable cost estimates, AOC's projects risk experiencing cost overruns or budget surpluses, missed deadlines, and performance shortfalls. Furthermore, potential limitations in the reliability of its estimates may impair Congress's ability to make well-informed funding decisions and affect how AOC allocates resources across competing projects in its capital program. In comparing the Cannon Office Building's renewal and Capitol Dome's restoration cost estimates to GAO's leading practices, we found strengths and weaknesses that generally correspond to our assessment of AOC's overall policies and guidance for developing cost estimates. We initially determined that both estimates were comprehensive while lacking, to varying extents, in their documentation, accuracy, and credibility. Following our initial determination, AOC provided further documentation that resulted in improvements to each estimate's assessment. Our final assessment found the Cannon Building renewal estimate to be substantially comprehensive, well documented, and accurate, while lacking in elements affecting its credibility, and the Capitol Dome restoration estimate to be substantially comprehensive and well documented, while lacking in areas pertaining to accuracy and credibility.
In aggregate, because of weaknesses pertaining to characteristics that are important to the development of high-quality, reliable estimates, AOC's cost estimates for the Cannon Building's renewal and Capitol Dome's restoration may not be fully reliable. Appendix III provides additional details on our comparison of these two cost estimates to our leading practices and on our assessment of these estimates against the characteristics of high-quality, reliable estimates. In terms of being comprehensive, we found that the Cannon Building estimate substantially met associated leading practices. The estimate has a WBS structure consistent with this type of effort and allows for tracking of cost and schedule performance by defined elements of work. In addition, AOC conducted a life-cycle-cost analysis of the heating and cooling system alternatives it considered, to include analyzing their energy consumption for the purpose of comparing operational costs. However, the estimate did not fully meet the characteristic, as it did not include full life-cycle costs, from inception through design, construction, operation, and maintenance. Without a life-cycle cost analysis that captures the total cost of the project, AOC cannot evaluate design alternatives on a total-cost basis. In terms of being well documented, we found that the estimate substantially met associated leading practices. In general, the information provided by AOC describes how the estimate was built up from engineering drawings, specifications, and design documents. In addition, there was evidence of documented management approval. However, the documentation AOC provided did not contain source data, such as from contractor bids or cost-estimating databases. While AOC officials said they could obtain this information for our review, to the extent that it was available, our leading practices indicate that an estimate's documentation should be detailed enough so that the derivation of each cost element can be traced to all sources, allowing for the estimate to be easily replicated and updated.
Because AOC could not readily provide the actual source data for the estimate, we determined that it did not fully meet the requirements of this characteristic. For the accuracy characteristic, we found the estimate substantially met leading practices. AOC's acquisition approach to the building's renewal involves its contracting with an architect, construction manager, and construction contractor, each of whom produced separate estimates. These estimates have a limited degree of independence because they were conducted for the same program office and focus only on the proposed contractor cost. However, this array of estimates has enabled AOC to make comparisons among them, determine similarities and differences, and develop a reasonably accurate assessment of estimated costs. For the credibility characteristic, we found the estimate partially met leading practices. During the course of our review, AOC conducted a risk and uncertainty analysis in accordance with a key leading practice for this characteristic. However, we found several issues affecting the quality of the analysis that AOC provided to us. For example, AOC's analysis concluded that the Cannon Building's renewal estimate had a confidence level that exceeded 90 percent—meaning there is a greater than 90 percent probability that actual costs will be equal to or less than AOC's estimate—which may be unreasonably high. However, an AOC official said that the agency will be reconsidering the confidence level once the design progresses further. In addition, we found that the method AOC used to model the project's risks (1) resulted in an unusually narrow range of estimated costs across the confidence intervals and (2) provides managers limited ability to understand the effects of individual risks. Because AOC aggregated risks for the purposes of its analysis, as opposed to modeling them separately, AOC cannot identify relationships between risk elements and determine which risks have the greatest influence on project costs. As a result, AOC is limited in its ability to manage the risks. In addition, modeling the aggregated risks likely contributes to the overly narrow range of estimated costs over the confidence levels, which implies that AOC's analysis overstates the effect of the risks. Turning to the Capitol Dome restoration estimate: in reference to the comprehensive characteristic, we found that the estimate substantially met leading practices. In particular, the estimate uses an industry-standard WBS format and contains sufficient detail on the technical characteristics of the project. However, the estimate did not fully meet leading practices, in part because we did not find a consolidated list of ground rules and assumptions or descriptions of how these affected the estimate. Contract documents provide indications of ground rules and assumptions affecting the project. For example, contract specifications and drawings for the recently awarded phase II work describe assumptions pertaining to availability of funding for contract options and accessibility of the site during the workday. However, we could not determine if these represented a comprehensive assessment of all ground rules and assumptions. According to the Cost Guide, because ground rules and assumptions can significantly affect cost and introduce risk, it is important that they are clearly documented in the estimate to enable areas of potential risk to be identified and resolved. For the well documented characteristic, we determined the estimate as having substantially met leading practices. We found, for example, that cost elements from the estimate can be compared to information in the drawings and specifications that define the contract for the phase II restoration. This linkage enables a good understanding of key characteristics of the estimate. In addition, AOC attested that the estimate had been briefed to and approved by management.
However, because records describing briefings of the estimate to management are not well documented, it is difficult to trace management’s recommendations for changes, feedback, and key decisions affecting the project. While AOC officials told us that its project staff and management communicate routinely as part of their normal business functions, documenting management briefings is important because some key personnel have changed over the life of the Capitol Dome restoration project, which originated more than 15 years ago. By not having well documented records of management briefings over the life of the project, AOC risks loss of continuity in its oversight. Considering the accuracy characteristic, we determined that the estimate partially met leading practices. We found, for example, that the estimating software and database used to construct the estimate included standard information, such as costs for labor, materials, and equipment. However, AOC did not consider the actual cost of previously completed restoration phases in updates to its estimate for the phase II work. While AOC used information on the price it paid for certain items in the phase I restoration work that are also part of the phase II project, it did not determine the contractor’s actual cost to perform the work. AOC officials said that having information on actual contractor costs for items may be of limited usefulness because market conditions and other factors determine what contractors bid and AOC ultimately pays. However, not having information on actual costs makes it difficult for AOC to assess the difference between the price it paid and its contractors’ costs to determine the estimate’s reasonableness. For the credibility characteristic, we determined that the estimate minimally met leading practices. 
We found, in contrast to the approach it took in developing the Cannon Building renewal estimate, that AOC did not conduct a risk and uncertainty analysis of the Capitol Dome restoration estimate. While AOC officials told us they have not conducted a quantitative-risk and uncertainty analysis of the estimate, they said they have taken steps to qualitatively assess and mitigate project risks. For example, AOC structured the phase II contract solicitation to have contractors include a base bid (phase IIA) and pre-priced options for later stages (phases IIB and IIC) of the work. According to AOC officials, by structuring the contract this way, AOC is protected from price escalations affecting the future phases. In addition, AOC established unit prices and quantity allowances for some restoration tasks that are of indefinite quantity, such as repairing cracks. According to AOC officials, this will help to control project costs because, while their quantity estimates may be imprecise, knowing the unit prices associated with the work should mitigate some cost risk. While these actions are encouraging, they are not documented within the context of a quantitative-risk and uncertainty analysis. As a result, we do not know how these actions relate to other risk mitigation efforts that may have been considered and why AOC chose these actions over other efforts. Moreover, not having a quantitative-risk and uncertainty analysis precludes establishing a level of confidence associated with achieving the estimated cost and limits AOC management's ability to determine an appropriate level of contingency reserves that may be necessary to address risks that AOC has identified. AOC's cost-estimating policy and guidance, and the two resulting project estimates that we reviewed, may not be fully reliable because they incorporated some, but not all, leading practices in cost estimating.
Because AOC's project cost estimates inform Congress's funding decisions and affect AOC's ability to effectively allocate resources across competing projects in its capital program, there is a risk that funding decisions and resource allocations could be made based on information that is not reliable. We recognize that incorporating GAO's cost-estimating best practices into AOC's cost-estimating policy and guidance may involve additional costs—such as for conducting a risk and uncertainty analysis for projects and conveying the confidence level of the estimate to Congress and AOC managers. However, without investing in these practices, Congress risks making funding decisions and AOC management risks making resource allocation decisions without the benefit that a robust analysis of levels of risk, uncertainty, and confidence provides decision makers. To improve the Architect of the Capitol's project-cost-estimating process, enhance the transparency of its related process, and allow for more informed decision making related to projects' costs, we recommend that the Architect of the Capitol take the following two actions, to the extent that the benefits exceed the costs: (1) incorporate the leading practices we identified as lacking into AOC's cost-estimating guidance and policies, and (2) for ongoing and future projects, submit the confidence level derived from risk and uncertainty analyses along with budget documentation to appropriate congressional decision makers, so that Congress is aware of the range of likely costs and AOC's associated confidence levels. We provided a draft of this report to the Architect of the Capitol for review and comment. AOC agreed with our recommendations and provided us with additional context and information on specific actions that the AOC has taken or intends to take to more fully address our recommendations.
For example, AOC said that it planned to revise its policies and procedures to require that quantitative-risk and uncertainty analysis be done, as specified in the Cost Guide, for high-dollar-value projects prior to requesting construction funding. In addition, AOC said that it would explore the most effective approach for communicating to congressional decision makers the confidence level derived from risk and uncertainty analyses along with budget documentation. We made no changes to our draft based upon AOC's comments. AOC's comments are reprinted in appendix IV, followed by our response to AOC's detailed comments. We are sending copies of this report to appropriate congressional committees and the Architect of the Capitol. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Lorelei St. James, Director, Physical Infrastructure Issues.

Appendix I: Tasks Associated with the 12 Leading Practices for Cost Estimating

Define the estimate's purpose. Associated tasks: Determine the estimate's purpose, required level of detail, and overall scope; determine who will receive the estimate.

Develop the estimating plan. Associated tasks: Determine the cost-estimating team and develop its master schedule; determine who will do the independent cost estimate; outline the cost-estimating approach; develop the estimate's timeline.

Define the program characteristics. Associated tasks: In a technical baseline-description document, identify the program's purpose and its system and performance characteristics and all system configurations; any technology implications; its program acquisition schedule and acquisition strategy; its relationship to other existing systems, including predecessor or similar legacy systems; support (manpower, training, etc.) and security needs and risk items; system quantities for development, test, and production; and deployment and maintenance plans.

Determine the estimating structure. Associated tasks: Define a work breakdown structure (WBS) and describe each element in a WBS dictionary (a major automated-information system may have only a cost element structure); choose the best estimating method for each WBS element; identify potential cross-checks for likely cost and schedule drivers; develop a cost-estimating checklist.

Identify ground rules and assumptions. Associated tasks: Clearly define what the estimate includes and excludes; identify global and program-specific assumptions, such as the estimate's base year, including time phasing and life cycle; identify program schedule information by phase and program acquisition strategy; identify any schedule or budget constraints, inflation assumptions, and travel costs; specify equipment the government is to furnish as well as the use of existing facilities or new modification or development; identify prime contractor and major subcontractors; determine technology refresh cycles, technology assumptions, and new technology to be developed; define commonality with legacy systems and assumed heritage savings; describe effects of new ways of doing business.

Obtain the data. Associated tasks: Create a data collection plan with emphasis on collecting current and relevant technical, programmatic, cost, and risk data; investigate possible data sources; collect data and normalize them for cost accounting, inflation, learning, and quantity adjustments; analyze the data for cost drivers, trends, and outliers and compare results against rules of thumb and standard factors derived from historical data; interview data sources and document all pertinent information, including an assessment of data reliability and accuracy; store data for future estimates.
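One of the data tasks above—normalizing collected cost data for inflation so that historical actuals can be compared in constant-year dollars—can be sketched as follows. The index values are invented for illustration; real work would use a published inflation or escalation index.

```python
# Normalizing historical costs to constant-year dollars.
# Index values are invented for illustration; use a published index in practice.
price_index = {2010: 100.0, 2011: 102.5, 2012: 105.1, 2013: 107.7}

def to_constant_dollars(amount, from_year, to_year, index=price_index):
    """Re-express a cost incurred in `from_year` in `to_year` dollars."""
    return amount * index[to_year] / index[from_year]

# A $1,000,000 actual cost from 2010, expressed in 2013 dollars:
normalized = to_constant_dollars(1_000_000, 2010, 2013)  # approximately 1,077,000
```

Normalization of this kind is what allows costs collected from different years and sources to be analyzed together for drivers, trends, and outliers.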
Develop point estimate and compare it to an independent cost estimate. Associated tasks: Develop the cost model, estimating each WBS element, using the best methodology from the data collected, and including all estimating assumptions; express costs in constant year dollars; time-phase the results by spreading costs in the years they are expected to occur, based on the program schedule; sum the WBS elements to develop the overall point estimate; validate the estimate by looking for errors like double counting and omitted costs; compare the estimate against the independent cost estimate and examine where and why there are differences; perform cross-checks on cost drivers to see if results are similar; update the model as more data become available or as changes occur and compare results against previous estimates.

Conduct sensitivity analysis. Associated tasks: Test the sensitivity of cost elements to changes in estimating input values and key assumptions; identify effects on the overall estimate of changing the program schedule or quantities; determine which assumptions are key cost drivers and which cost elements are affected most by changes.

Conduct a risk and uncertainty analysis. Associated tasks: Determine and discuss with technical experts the level of cost, schedule, and technical risk associated with each WBS element; analyze each risk for its severity and probability; develop minimum, most likely, and maximum ranges for each risk element; determine the type of risk distributions and the reason for their use; ensure that risks are correlated; use an acceptable statistical analysis method (e.g., Monte Carlo simulation) to develop a confidence interval around the point estimate; identify the confidence level of the point estimate; identify the amount of contingency funding and add this to the point estimate to determine the risk-adjusted cost estimate; recommend that the project or program office develop a risk management plan to track and mitigate risks.
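The risk and uncertainty tasks listed above can be illustrated with a small Monte Carlo simulation. The three-point (minimum, most likely, maximum) cost ranges below are invented for illustration, the risks are modeled independently rather than correlated (a simplification the Cost Guide would flag), and no AOC data are used.

```python
import random

# Invented three-point (low, most likely, high) cost ranges, in $ millions,
# for a few illustrative WBS elements. Not AOC data.
elements = {
    "structure": (4.0, 5.0, 7.0),
    "systems":   (2.5, 3.0, 4.5),
    "finishes":  (1.0, 1.5, 2.5),
}
point_estimate = sum(most_likely for _, most_likely, _ in elements.values())  # 9.5

def simulate(trials=20_000, seed=1):
    """Sample each element from a triangular distribution and sum the draws."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        totals.append(sum(rng.triangular(low, high, mode)
                          for low, mode, high in elements.values()))
    return sorted(totals)

totals = simulate()
# Confidence level of the point estimate: fraction of trials at or below it.
confidence = sum(t <= point_estimate for t in totals) / len(totals)
# Contingency needed to budget the project at an 80 percent confidence level.
p80 = totals[int(0.8 * len(totals)) - 1]
contingency = p80 - point_estimate
```

Because the invented ranges are skewed toward higher costs, the sum of most-likely values carries a confidence level well below 50 percent—precisely the kind of insight a point estimate alone cannot convey to decision makers.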
Document the estimate. Associated tasks: Document all steps used to develop the estimate so that a cost analyst unfamiliar with the program can recreate it quickly and produce the same result; document the purpose of the estimate, the team that prepared it, and who approved the estimate and on what date; describe the program, its schedule, and the technical baseline used to create the estimate; present the program's time-phased life-cycle cost; discuss all ground rules and assumptions; include auditable and traceable data sources for each cost element and document for all data sources how the data were normalized; describe in detail the estimating methodology and rationale used to derive each WBS element's cost (prefer more detail over less); describe the results of the risk, uncertainty, and sensitivity analyses and whether any contingency funds were identified; document how the estimate compares to the funding profile; track how this estimate compares to any previous estimates.

Present estimate to management for approval. Associated tasks: Develop a briefing that presents the documented life-cycle cost estimate; include an explanation of the technical and programmatic baseline and any uncertainties; compare the estimate to an independent cost estimate (ICE) and explain any differences; compare the estimate (life-cycle cost estimate (LCCE)) or independent cost estimate to the budget with enough detail to easily defend it by showing how it is accurate, complete, and high in quality; focus in a logical manner on the largest cost elements and cost drivers; make the content clear and complete so that those who are unfamiliar with it can easily comprehend the competence that underlies the estimate results; make backup slides available for more probing questions; act on and document feedback from management; request acceptance of the estimate.
Update the estimate to reflect actual costs and changes. Associated tasks: Update the estimate to reflect changes in technical or program assumptions, or keep it current as the program passes through new phases or milestones; replace estimates with EVM data and the independent estimate at completion (EAC) from the integrated EVM system; report progress on meeting cost and schedule estimates; perform a post mortem and document lessons learned for elements whose actual costs or schedules differ from the estimate; document all changes to the program and how they affect the cost estimate.

Appendix II: Objectives, Scope, and Methodology

The House Appropriations Committee report accompanying the fiscal year 2014 Legislative Branch Appropriations Bill (H.R. 2792) mandated that we review the Architect of the Capitol's (AOC) cost estimating methodology to ensure that AOC is accounting for all of the variables that should contribute to project cost estimates. This report addresses the extent to which AOC's policies and guidance for developing cost estimates conform to leading practices identified in GAO's Cost Estimating and Assessment Guide and provide a reliable basis to support funding and capital program decisions. This report also examines whether the estimates for the Capitol Dome and the Cannon Building projects reflect leading practices. To determine the extent to which AOC's policies and guidance comply with leading practices for cost estimating and provide a reliable basis to support funding and program decisions, we compared AOC's documents to leading practices set forth in GAO's Cost Estimating and Assessment Guide. The Cost Guide identifies 12 leading practices that represent work across the federal government and are the basis for a high-quality, reliable cost estimate. An estimate created using the leading practices exhibits four broad characteristics: it is accurate, well documented, credible, and comprehensive. That is, each characteristic is associated with a specific set of leading practices. In turn, each leading practice is made up of a number of specific tasks.
(See app. I for a listing of the tasks that make up each of the 12 leading practices.) When the tasks associated with the leading practices that define a characteristic are mostly or completely satisfied, we consider the characteristic to be "substantially" or "fully" met. When all four characteristics are at least substantially met, we consider a cost estimate to be reliable. In reference to our examination of the estimates for the Capitol Dome and Cannon Building projects, we selected these projects based on their significance to the Capitol Complex, comparatively high costs, and public visibility. In addition, our review of the Cannon Building estimate is responsive to a mandate in the House Appropriations Committee report accompanying the fiscal year 2010 Legislative Branch Appropriations Bill (H.R. 2918) requiring us to monitor the progress of the project. We selected versions of the estimates to review that had the most complete information for our assessments. We assessed the reliability of data used in developing the estimates and found them to be sufficiently reliable for the purposes of this report. For example, while some source data were unavailable to us, we were able to assess AOC's process for building its estimates with the data and check for errors. We also interviewed AOC's staff and its technical consultant about these projects and observed existing conditions at the Cannon Building and Capitol Dome. While our review of these projects' cost estimates provides key insights and illustrates products of AOC's cost-estimating policies and guidance, the results of our review should not be used to make generalizations about all AOC project-cost estimates. We shared our analysis with AOC officials to review, comment, and provide additional information, and we adjusted our analysis where appropriate.
We conducted our work from September 2013 to March 2014 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.

Appendix III: GAO’s Summary Assessments of AOC’s Project Cost Estimates for the Cannon Building’s Renewal and Capitol Dome’s Restoration

Estimating leading practice characteristic and rationale for assessment

Comprehensive: The cost estimate should include both government and contractor costs of the program over its full life cycle, from inception of the program through design, development, deployment, operation and maintenance, to retirement of the program. It should also completely define the program, reflect the current schedule, and be technically reasonable. Comprehensive cost estimates should be structured in sufficient detail to ensure that cost elements are neither omitted nor double counted. Specifically, the cost estimate should be based on a product-oriented work breakdown structure (WBS) that allows a program to track cost and schedule by defined deliverables, such as hardware or software components. Finally, where information is limited and judgments must be made, the cost estimate should document all cost-influencing ground rules and assumptions. The Cannon Building cost estimate did not include all life cycle costs as AOC guidance did not require these life cycle costs to be identified. The estimate tracked well with the work required, as it had a WBS structure consistent with this type of effort, and clearly laid out and appeared to update ground rules and assumptions based on evolving understanding of the scope of work.
The Capitol Dome cost estimate covers the construction phase of the dome’s restoration. The cost estimate relies on specifications and drawings that exist in separate files. The cost estimate uses a standard WBS structure but does not include the underlying data. AOC submitted an extract of the data for our review. While the specifications include ground rules and assumptions, the cost estimate does not specify them.

Well documented: A good cost estimate—while taking the form of a single number—is supported by detailed documentation that describes how it was derived and how the expected funding will be spent in order to achieve a given objective. Therefore, the documentation should capture in writing such things as the source data used, the calculations performed and their results, and the estimating methodology used to derive each WBS element’s cost. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources so that the estimate can be easily replicated and updated. The documentation should also discuss the technical baseline description and how the data were normalized. Finally, the documentation should include evidence that the cost estimate was reviewed and accepted by management. The Cannon Building cost estimate documentation did not provide source data. It did provide a reasonable explanation for how the cost estimates were prepared, what type of data were used, and the basic estimating methodologies indicating some step by step processes. The documentation provided consisted of tables of unit values, quantities, and extended values. While extended costs were identified, it was difficult to follow the roll-up of these costs. Without a copy of or access to the cost model, we were unable to verify the accuracy of escalation calculations and we could not trace all of the logic from the source data to the resulting estimate.
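The escalation calculations discussed here follow a standard pattern: costs expressed in constant base-year dollars are multiplied by inflation indices to produce then-year dollars. A minimal sketch of that arithmetic follows; the base year, indices, and phased costs are all assumptions for illustration, not AOC figures.

```python
# Sketch of escalating constant-year costs to then-year dollars.
# All figures below are illustrative assumptions, not AOC data.
BASE_YEAR = 2014
inflation_index = {2014: 1.000, 2015: 1.021, 2016: 1.043, 2017: 1.066}  # assumed

# Phased project costs in constant base-year dollars ($M).
constant_year_costs = {2014: 10.0, 2015: 25.0, 2016: 40.0, 2017: 15.0}

# Then-year cost for each year = constant-year cost * that year's index.
then_year_costs = {
    year: cost * inflation_index[year]
    for year, cost in constant_year_costs.items()
}

total_constant = sum(constant_year_costs.values())
total_then_year = sum(then_year_costs.values())
print(f"Constant-year total: ${total_constant:.1f}M")
print(f"Then-year total:     ${total_then_year:.3f}M")
```

Documenting the indices and the base year alongside the estimate is what lets a reviewer replicate these multiplications; their absence is the traceability gap described above.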
The calculations and changes to the baseline were not well documented but subsequent data and explanations highlighted the process for the changes. Additionally, soft costs and contingencies, comprising a substantial portion of total program costs, were only superficially addressed. There was evidence of management approval of the estimate. The Capitol Dome cost estimate captures all source documents used. The cost estimate relies on a WBS and shows how the calculations were performed. A cost analyst unfamiliar with the program could develop the cost estimate, although in places it is not clear. The cost estimate has a brief reference to the source documents used. Costs are presented in constant-year and then-year dollars.

Accurate: The cost estimate should provide for results that are unbiased, and it should not be overly conservative or optimistic. An estimate is accurate when it is based on an assessment of most likely costs, adjusted properly for inflation, and contains few, if any, minor mistakes. In addition, a cost estimate should be updated regularly to reflect significant changes in the program—such as when schedules or other assumptions change—and actual costs, so that it is always reflecting current status. During the update process, variances between planned and actual costs should be documented, explained, and reviewed. Among other things, the estimate should be grounded in a historical record of cost estimating and actual experiences on other comparable programs. The Cannon Building cost estimate included an uncertainty analysis indicating what confidence level the budget fell within, which was greater than 90 percent. Additionally, in the absence of having the cost model used to prepare the estimate, we were only able to validate the accuracy of a very small sample of WBS elements.
Furthermore, while the estimate documentation identified inflation adjustments, because of missing calculation and conversion factors, we were unable to determine if the estimate has been adjusted properly for inflation. However, the documentation did provide a discussion of what and how the inflation adjustments were applied. The Capitol Dome cost estimate did not include a risk and uncertainty analysis indicating what confidence level for the estimate the budget was set at. The estimate is shown in constant-year and then-year dollars using indices for inflation. The indices used to adjust for inflation appear to be out-of-date. The estimate has few errors. AOC updated its cost estimate prior to beginning the project, but once the project began, AOC did not update the estimate. AOC did not document variances between planned and actual costs. AOC relied on historical records and on an industry database to support its estimate. The estimate uses an engineering build-up throughout, which appears appropriate for this project.

Credible: The cost estimate should discuss any limitations of the analysis because of uncertainty or biases surrounding data or assumptions. Major assumptions should be varied, and other outcomes recomputed to determine how sensitive they are to changes in the assumptions. Risk and uncertainty analysis should be performed to determine the level of risk associated with the estimate. Further, the estimate’s cost drivers should be crosschecked, and an independent cost estimate conducted by a group outside the acquiring organization should be developed to determine whether other estimating methods produce similar results. The Cannon Building cost estimate includes analyses of sensitivity, risk, and uncertainty. However, the sensitivity analysis was performed after the fact and was not used to inform the budget nor provided to management for consideration of risk prior to project start.
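A quantitative risk and uncertainty analysis of the kind discussed here is typically run as a Monte Carlo simulation over the WBS: each cost element is drawn from a distribution, the trials are summed, and a budget is read off at a chosen confidence level. The sketch below uses hypothetical elements and triangular distributions; none of the figures are AOC's.

```python
import random

random.seed(0)

# Hypothetical WBS elements with (low, most likely, high) costs in $M.
# Modeling each element individually, rather than in aggregate, lets an
# analyst see which elements drive the variance in total cost.
elements = {
    "structural repairs": (40, 50, 75),
    "mechanical systems": (25, 30, 50),
    "roof replacement": (10, 12, 20),
    "alarm system": (5, 6, 10),
}

def one_trial():
    """One Monte Carlo trial: draw every element from its triangular distribution."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in elements.values())

trials = sorted(one_trial() for _ in range(10_000))

def percentile(p):
    """Total cost at the p-percent confidence level."""
    return trials[int(p / 100 * (len(trials) - 1))]

def confidence(budget):
    """Percent of trials a given budget would cover."""
    return 100 * sum(t <= budget for t in trials) / len(trials)

for p in (55, 80, 90):  # confidence levels discussed in the leading practices
    print(f"{p}% confidence budget: ${percentile(p):.1f}M")
print(f"Confidence level of a $120M budget: {confidence(120):.0f}%")
```

Reading the budget off such a distribution is what budgeting to a given confidence level means; a budget whose confidence exceeds 90 percent sits far out on the right tail of the simulated totals.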
AOC’s risk and uncertainty analysis indicates that the budget for the project is set above the 90% confidence level, which may be unreasonably high. AOC has received multiple estimates from different sources (architect, construction manager, pre-construction contractor) as a cross-check for reasonableness. While these estimates are not necessarily outside the AOC program office’s influence, they do appear to represent independent estimates based on each company’s interpretation of the project’s requirements and helped to validate the reliability of the estimates. 1. We continue to believe that a quantitative risk and uncertainty analysis enables more effective oversight than can be obtained by taking a qualitative approach. A quantitative risk and uncertainty analysis conveys the confidence level in achieving the most likely cost and provides actionable information about cost, schedule, and technical risks that cannot be obtained qualitatively. 2. While AOC’s contractors act independently, AOC’s Project Management Division is responsible for the estimates prepared via contract, and AOC’s Cost Estimating Group is responsible for reviewing them. Because AOC’s Project Management Division and Cost Estimating Group staff are part of the same Planning and Project Management organization, the Cost Estimating Group staff could be biased because of organizational influences and unable to provide a fully objective review of costs to the Project Management Division. 3. While our leading practices do not specify target confidence levels, experts we consulted with in developing our leading practices agreed that program cost estimates should be budgeted to at least the 55 percent confidence level and potentially as high as 80 percent. While AOC has good reasons for its interest in maintaining the project’s budget at a high confidence level, we believe that the confidence level for the project’s budget—exceeding 90 percent—may be excessive. 4. 
AOC identified 60 risks to the project and modeled them in aggregate, as opposed to individually, in conducting its quantitative risk and uncertainty analysis. Modeling the aggregated risks precludes AOC from identifying relationships among risk elements and determining which risks have the greatest influence on project costs. We continue to believe that modeling the risks in aggregate likely contributes to the overly narrow range of estimated costs over the confidence levels, and can overstate the effect of the risks. 5. Our leading practices indicate that an estimate’s documentation should be detailed enough so that the derivation of each cost element can be traced to all sources allowing the estimate to be easily replicated and updated. Because some information was not readily available in documents maintained by AOC, we were unable to determine if the estimate has been adjusted properly for inflation. 6. AOC has taken positive steps in conducting a life-cycle cost analysis that considered fixed costs and ongoing maintenance costs to inform the AOC’s selection of mechanical systems for heating and cooling the Cannon Building. However, AOC did not conduct life-cycle analyses for other components of the project, such as the building’s roof and alarm system. This precludes AOC from capturing the total cost of the project and evaluating design alternatives on a total-cost basis. 7. AOC takes positive steps to qualitatively identify risks and uncertainty and assess sensitivity. While this analysis is useful in setting contingencies and managing risks during construction, a quantitative risk and uncertainty analysis enables more effective oversight because it conveys the confidence level in achieving the most likely cost and provides actionable information about cost, schedule, and technical risks that cannot be obtained qualitatively. 8. 
While AOC has used some information from completed phases of the Capitol Dome’s restoration project to inform development of the estimate for later phases, not having information on actual costs makes it difficult for AOC to assess the difference between the price it paid and its contractors’ costs to determine the estimate’s reasonableness. 9. Our leading practices indicate that an estimate’s documentation should be detailed enough so that the derivation of each cost element can be traced to all sources allowing the estimate to be easily replicated and updated. Because some information was out-of-date at the time of our review, we were unable to determine if the estimate had the proper escalation adjustments. In addition to the contact named above, Michael Armes, Assistant Director; Karen Richey, Assistant Director; George Depaoli, Analyst-in-Charge; Laura Erion; Emile Ettedgui; Colin Fallon; Geoffrey Hamilton; James Manzo; Faye Morrison; and Vanessa Welker made key contributions to this report. | AOC is responsible for the maintenance, renovation, and new construction of the U.S. Capitol complex, which comprises more than four dozen facilities. Reliable cost estimates for projects are crucial to AOC's capital-planning and construction processes. The House Appropriations Committee report accompanying the fiscal year 2014 Legislative Branch Appropriations bill mandated that GAO review AOC's cost-estimating methodology. This report addresses the extent to which AOC's policies and guidance for developing cost estimates adhere to leading practices. GAO analyzed AOC's cost-estimating guidance, interviewed AOC officials, and compared AOC's cost-estimating guidance and documentation and two projects' cost estimates to leading practices in GAO's Cost Guide. When most or all of the practices associated with each characteristic of a high-quality, reliable estimate are followed, GAO considers the characteristic to be “fully” or “substantially” met.
When, in turn, all four characteristics are at least “fully” or “substantially” met, GAO considers a cost estimate to be reliable. GAO's Cost Estimating and Assessment Guide (Cost Guide) defines 12 leading practices that are associated with four characteristics—comprehensive, well documented, accurate, and credible—that are important to developing high-quality, reliable project-cost estimates. Using the Cost Guide, GAO determined that the Architect of the Capitol's (AOC) cost-estimating guidance conforms to leading practices for developing estimates that are, in general, comprehensive and well documented. However, AOC's guidance does not substantially conform to leading practices related to developing cost estimates that are accurate and credible. For example, pertaining to the credible characteristic, AOC's guidance does not require determining the confidence level of estimates or quantifying the extent to which a project's costs could vary due to changes in key assumptions. GAO found the strengths and weaknesses of AOC's guidance generally reflected in the cost estimates for AOC's Cannon House Office Building's (Cannon Building) renewal project ($753 million) and Capitol Dome's restoration project ($125 million). Cannon Building renewal—GAO found the estimate is substantially comprehensive, well documented, and accurate, but several factors that affect its credibility are lacking. For example, AOC's risk analysis does not allow for determination of which risks have the greatest influence on project costs and may overstate the effect of the risks. Capitol Dome restoration—GAO found the estimate is substantially comprehensive and well documented, but lacking key analyses that support accurate and credible estimates. For example, AOC did not use actual costs from completed phases to update its estimates and did not complete a risk and uncertainty analysis.
Overall, AOC's cost-estimating guidance may not enable fully reliable estimates because it incorporates some, but not all, leading practices. Without reliable cost estimates that convey their confidence levels, AOC's projects risk experiencing cost overruns or budget surpluses, missed deadlines, and performance shortfalls. Potential limitations in the reliability of AOC's estimates may make it difficult for Congress to make well-informed funding decisions and affect how AOC allocates resources across competing projects in its capital portfolio. Source: GAO analysis of AOC documents and data. Rating legend: Fully Meets; Substantially Meets; Partially Meets; Minimally Meets; Does Not Meet. Note: A characteristic is fully met when the associated tasks of underlying leading practices are completely satisfied; substantially met when a large portion of the associated tasks are satisfied; partially met when about half of the associated tasks are satisfied; minimally met when a small portion of the associated tasks are satisfied; and not met when none of the associated tasks are satisfied. GAO recommends that AOC incorporate additional leading practices into its cost-estimating guidance and submit the confidence levels of project estimates to Congress. AOC concurred with the recommendations and provided context and clarification on its cost-estimating guidance and policies. |
The United States has been the largest single donor to HIV/AIDS prevention in developing countries, contributing over $500 million in Africa between fiscal year 1988 and 2000 through the U.S. Agency for International Development (USAID). The agency’s efforts have mainly been directed at specific target groups to reduce the spread of the disease through behavior change communication activities; promotion of increased condom use; and improved prevention, diagnosis, and treatment of sexually transmitted infections. In July 2000, USAID also began to fund other activities—such as treatment for tuberculosis and other opportunistic infections and care for AIDS orphans—aimed at mitigating the impact of the disease. USAID has a decentralized organizational structure (see fig. 1), which vests most of the authority for developing and implementing programs in the country offices, or missions. Four regional bureaus, such as the Africa Bureau, support field mission activities through the provision of technical, logistical, and financial assistance. The Global Bureau’s HIV/AIDS Division negotiates contracts, grants, and cooperative agreements with private voluntary organizations that missions can access for particular expertise, such as development of HIV/AIDS prevention communication campaigns. The Global Bureau also funds research that can be used to improve mission programs, supports the Joint United Nations Programme on HIV/AIDS (UNAIDS), and coordinates efforts by other U.S. government agencies, such as the Centers for Disease Control, to address the epidemic in developing countries. At the time of this review, USAID conducted HIV/AIDS activities at 19 missions in sub-Saharan Africa and implemented activities in other countries in the region from three of its regional offices. Throughout the 1990s HIV/AIDS prevalence continued to increase in most of the countries in sub-Saharan Africa (see fig. 2). 
The increasing prevalence of HIV/AIDS has had a substantial impact on the region’s population, resulting in (1) high death rates, (2) increased infant and child mortality, (3) reduced life expectancy, and (4) large numbers of orphans. The epidemic has also offset gains from investment in social and economic development. Despite the efforts of USAID and international donors, however, several challenges to slowing the epidemic’s spread remain. These include social, cultural, and political issues endemic to the region. The most direct impact of AIDS has been to increase the overall numbers of deaths in affected populations. UNAIDS estimates that since 1993, the number of people infected with HIV/AIDS in sub-Saharan Africa has tripled to 25.3 million and more than 17 million people have died. According to the U.S. Census Bureau, estimated death rates have increased by 50 to 500 percent in eastern and southern Africa over what they would have been without AIDS. For example, in Kenya the death rate is twice as high, at 14.1 per 1,000 population, as opposed to the 6.5 per 1,000 it would have been without AIDS. According to the U.S. Census Bureau, infant and child mortality rates in sub-Saharan Africa are also significantly higher than they would have been without AIDS. For example, in Zimbabwe infant mortality without AIDS would have been 30 per 1,000 in 2000. With AIDS, the infant mortality rate in 2000 was 62 per 1,000. The Census Bureau estimates that by 2010, more infants in Botswana, Zimbabwe, South Africa, and Namibia will die from AIDS than from any other cause. Rising child mortality rates due to AIDS are most dramatic in countries where death from other causes, such as diarrhea, had been significantly reduced. For example, in South Africa, Census Bureau data show that 45 percent of all deaths among children under age 5 in 2000 were AIDS related. 
In Zimbabwe, 70 percent of child deaths in 2000 were AIDS related, and AIDS-related deaths there are expected to increase to 80 percent by 2010. According to the World Bank, one of the most disturbing long-term trends associated with the HIV/AIDS epidemic is reduced life expectancy. By 2010 to 2015, life expectancy is expected to decline 17 years in nine countries in sub-Saharan Africa, to an average of 47 years. For example, the Census Bureau estimates that a child born in 2000 in Botswana can expect to live only 39 years. Without AIDS, that child would have a life expectancy of 71 years. In addition, the Census Bureau estimates that life expectancy in Botswana will decline to 29 years by 2010, a level not seen since the beginning of the 20th century. This dramatic decrease in life expectancy in the region represents a reversal of the gains of the past 30 years. Figure 3 shows the impact of AIDS on longevity in 13 sub-Saharan African countries. Also, because of AIDS, children in sub-Saharan Africa are being orphaned in increasingly large numbers. According to UNAIDS, by the end of 1999, approximately 13 million children worldwide had been orphaned by AIDS, with 95 percent of them in Africa. Further, according to a report prepared for USAID, orphans will eventually comprise up to 33 percent of the population under age 15 in some African countries. While orphans in Africa have traditionally been absorbed into extended families, the advent of the HIV/AIDS epidemic has caused these family structures to be overburdened, leaving many children without adequate care. The World Bank notes that orphans are more likely to be malnourished and less likely to go to school. According to UNAIDS, orphans are frequently without the means to survive and therefore may turn to prostitution or other behaviors that heighten their risk of contracting HIV themselves. Figure 4 shows the numbers of AIDS orphans in 12 African countries in 1999. 
The spread of HIV/AIDS has begun to negatively affect population growth rates in sub-Saharan Africa. Typically, developing countries experience a population growth rate of 2 percent or greater, compared with much lower rates in developed countries. As late as 1998, the Census Bureau predicted that the AIDS epidemic would have no effect on population growth in sub-Saharan Africa because of the region’s high fertility rate. However, the Census Bureau now predicts that by 2003, Botswana, South Africa, and Zimbabwe will all be experiencing negative population growth due to high prevalence of HIV and the low fertility and high infant and child mortality rates in these three countries. By 2010, the Census Bureau estimates that the growth rate for these countries will be -1 percent, the first time that negative population growth has been projected for developing countries. Population growth is expected to stagnate in at least five other countries in the region, including Lesotho, Malawi, Mozambique, Namibia, and Swaziland. AIDS has had a significant effect on social and economic development in the region as increasing numbers of people in their most productive years have died. For example, according to USAID, AIDS directly affects the education sector as the supply of experienced teachers is reduced by AIDS-related illness and death. The World Bank estimates that more than 30 percent of the teachers in Malawi and Zambia are already infected with HIV. According to UNAIDS, during the first 10 months of 1998, 1,300 teachers in Zambia died of AIDS—the equivalent of about 66 percent of all new teachers trained annually. In addition, fewer children are attending school. The death of a parent is a permanent loss of income that often requires the removal of children from school to save on educational expenses and to increase household labor and income. The agriculture sector has also been affected by the epidemic.
Agriculture, the biggest sector in most African economies, accounts for a large portion of economic output and employs the majority of workers. However, as farmers become too ill to tend their crops, agricultural production declines for the country. For example, according to UNAIDS, in Côte d’Ivoire, many cases of reduced cultivation of crops such as cotton, coffee, and cocoa have been reported. Likewise, in Zimbabwe, agricultural output has fallen by 50 percent over a 5-year period during the late 1990s, due in part to farmers becoming sick and dying from AIDS. In addition, the cost of doing business in Africa has increased in many sectors of the economy due to HIV/AIDS. The epidemic’s costs to employers include expenditures for medical care and funeral expenses. A 1999 report prepared for USAID found that because of the increased levels of employee turnover due to HIV/AIDS, employers also are experiencing greater expenses due to the recruitment and training of new employees. According to the United Nations International Labour Office, to combat increased costs, some employers in sub-Saharan Africa have begun to hire or train two or three employees for the same position because of the concern that employees in key positions may get sick and die from AIDS. While international organizations have worked to stem the spread of the disease, funding constraints, cultural and social traditions, the low socioeconomic status of women, weak health care infrastructure, difficulty reaching men in uniform, and the slow response of national governments have impeded their efforts. In 2000, UNAIDS estimated that at least $3 billion is needed annually for HIV prevention and care in sub-Saharan Africa. By contrast, according to USAID, international donors contributed less than 20 percent of what was needed in fiscal year 2000 to support HIV/AIDS activities in the region. 
USAID—which has been the largest international donor to fight HIV/AIDS in Africa—spent $114 million in the region in fiscal year 2000, of its total worldwide HIV/AIDS budget of $200 million. As shown in table 1, USAID efforts translated into per capita expenditures for 23 sub-Saharan African countries in fiscal year 2000 ranging from $0.78 in Zambia to $0.03 in the Democratic Republic of the Congo. The social stigma surrounding issues of sex and death in African culture makes it difficult to discuss the risks of HIV/AIDS and measures to prevent the disease. A 2000 report by the Congressional Research Service notes that unwillingness by religious or community leaders to discuss condom use or risky behavior limits efforts to introduce condoms or HIV testing as ways to prevent further spread of the disease. According to UNAIDS, discrimination may also lead people who are infected to hide their status to protect themselves and their families from shame. For example, a 2000 UNAIDS report stated that in 1999 in Rusinga Island, Kenya, children whose parents had died of AIDS would tell others that witchcraft or a curse had been the cause of death instead. Traditional beliefs and practices in sub-Saharan Africa also contribute to the spread of the disease and limit the effectiveness of prevention programs. For example, a common custom promoted by traditional healers in Zambia is for a widow to engage in sexual relations to “cleanse” herself of the spirit of the deceased. Transmission of HIV in sub-Saharan Africa is primarily from heterosexual contact and, unlike other places in the world where men have higher rates of infection, 55 percent of people with AIDS in the region are women. According to UNAIDS, African girls aged 15 to 19 are approximately eight times more likely to be HIV positive than are boys their own age. Between the ages of 20 and 24, women are still three times more likely to be infected than men their age. 
These young women are usually infected by older men, often through coerced or forced sex, according to the Congressional Research Service. The higher infection rates among women are due, in part, to the higher vulnerability of the female reproductive tract to infection. However, according to UNAIDS, high infection rates are also caused by women’s limited ability to make informed choices to prevent the disease, due to their low socioeconomic status. Low levels of education for women in the region make it more difficult for them to find work, forcing them to rely on men for economic sustenance. According to USAID, laws in some countries, such as Kenya, do not allow women to inherit property. As a result, with no job skills or education, a woman may choose prostitution to support her children following the death of her spouse. In addition, because women lack economic resources of their own and may fear abandonment by or violence from their male partners, they have little or no control over how and when they have sex. According to UNAIDS, a woman may be fearful to ask her male partner to use a condom because he may interpret her actions as implying that she knows of his infidelities or that she has been unfaithful. The epidemic is overwhelming the already fragile health care systems in sub-Saharan Africa, and weak health care infrastructure is a barrier to diagnosis, treatment, and care of the affected populations. For example, in many countries in the region, up to one-half of the population does not have access to health care. The countries of the region frequently lack basic commodities such as syringes as well as safe drug storage, laboratories, and trained clinicians. Further, according to USAID, mother-to-child transmission of HIV is increased by the lack of access to drugs that block HIV replication, while this treatment has reduced mother-to-child transmission to less than 1 percent in developed countries. 
According to UNAIDS, AIDS patients take up a majority of the hospital beds in many cities, leaving non-AIDS patients without adequate care. For example, a 2000 World Bank report notes that in Côte d’Ivoire, Zambia, and Zimbabwe, HIV-infected patients occupy 50 to 80 percent of all beds in urban hospitals. According to the National Intelligence Council, HIV prevalence in African militaries is considerably higher than that of the general population. The Council estimates prevalence rates of 10 to 60 percent among military personnel in the region. For example, the HIV infection rate for the armed forces of Tanzania is estimated to be 15 to 30 percent, compared with about an 8 percent prevalence rate for the general population. According to USAID, in developing countries, military and police forces generally tend to be a young and highly mobile population that spends extended periods of time away from families and home communities. As a group, this population is likely to have more contact with casual sexual partners and commercial sex workers and engage in high-risk sexual behavior. As a result, the group is at increased risk of acquiring HIV/AIDS and transmitting it to the general population. Military and police forces have constant interaction with civilian populations where they are posted; therefore, they have been identified as an important target group for campaigns for the prevention and mitigation of HIV/AIDS. However, according to USAID, militaries have been unwilling to release detailed reports on HIV prevalence among troops, which has limited the ability of donor assistance groups such as USAID from working with African militaries and police forces. Another factor limiting USAID in working with African military and police forces is a legislative restriction prohibiting assistance for training, advice, or financial support to foreign military and law enforcement forces. 
In 1996, USAID’s General Counsel took the position that the restrictions do not prohibit participation of foreign police or military forces in USAID’s HIV/AIDS prevention programs if three conditions are met: (1) the programs for police and military forces are part of a larger public health initiative, and exclusion of these groups would impair achievement of the overall public health objectives; (2) the programs must be the same as those offered to the general population; and (3) neither the programs nor any commodities transferred under them can be readily adapted for law enforcement, military, or internal security functions. A USAID official in one country told us that the USAID legal adviser in her region requires a justification for each activity directed toward police or military forces and that this is a disincentive to pursuing such activities. Overall, we found that only 8 of the 19 missions reported working with the military or police forces. The mission in Nigeria indicated that it has provided HIV/AIDS prevention and impact mitigation services to military and police personnel. Also, the USAID missions in Ethiopia and Guinea have promoted condom acceptability and use among military personnel. Most national governments in sub-Saharan Africa have been slow to put effective HIV/AIDS policies in place. According to the World Health Organization, many countries in sub-Saharan Africa have not developed or completed a national strategic plan for reducing HIV/AIDS or provided sufficient resources or official support for HIV prevention efforts. For example, until 1999, the President of Zimbabwe denied that AIDS was a problem, and the President of Kenya did not endorse the use of condoms as a prevention method. In contrast, the President of Uganda has led a successful campaign against AIDS in his country, which, according to the Director of the Office of National AIDS Policy, contributed to the decrease in HIV prevalence. 
USAID has contributed to the fight against HIV/AIDS in sub-Saharan Africa, particularly through country-level activities, including education and counseling; condom promotion and distribution; and improved prevention, diagnosis, and treatment of sexually transmitted infections. In addition, USAID’s Global and Africa bureaus supported various activities in the areas of research, capacity building, integration of HIV/AIDS prevention activities into other development efforts, and advocacy for policy reform. (See app. II for a description of specific contributions made by the Global and Africa bureaus in these areas.) However, measuring the impact of HIV/AIDS interventions on reducing transmission of the virus is difficult, according to experts at Family Health International and the University of California Los Angeles. Overlapping contributions of HIV/AIDS prevention programs of national governments and of other donors make direct causal linkage of behavior or prevalence changes to USAID’s activities hard to measure. To assess its programs, USAID must rely on proxy measures because HIV has a long latency period, and limited surveillance data are available in the region. Generally accepted proxy measures include knowledge of HIV/AIDS and sexual behavior changes, such as increased condom use. However, gaps in data gathering and reporting, including the inconsistent use of indicators and the lack of a routine system for reporting program results, further limit USAID’s ability to measure its overall impact on reducing HIV transmission. USAID has focused its HIV/AIDS prevention activities in sub-Saharan Africa on three interventions that have been proven to be effective in the global fight against the epidemic: behavior change communications, condom social marketing, and treatment and management of sexually transmitted infections. 
USAID missions and regional offices in sub-Saharan Africa targeted their HIV/AIDS prevention activities to high-risk groups, such as commercial sex workers and interstate truck drivers. USAID maintains that a targeted approach remains the best way to reduce the number of new infections in the general population and to allow for more efficient use of limited HIV/AIDS prevention funds. Because of the difficulty of obtaining accurate information on incidence and prevalence, however, USAID must rely on proxy indicators to measure the impact of its HIV/AIDS programs. USAID promotes behavior change through voluntary counseling and information campaigns to heighten awareness of the risks of contracting HIV/AIDS and spreading it to others. Specifically, these activities are to help motivate behavior change, heighten the appeal of health products and services, and decrease the stigma related to purchase and use of condoms. For example, the mission in Nigeria reported supporting an information campaign among sex workers, transport workers, and youth to increase condom use. In addition, the mission in Malawi supported voluntary HIV testing and counseling services in two cities, Lilongwe and Blantyre. Ten USAID missions and one regional office that conducted behavior change communication activities reported increased knowledge and awareness of HIV/AIDS, a measure used to gauge the effectiveness of these types of programs. For example, six missions and one regional office provided information that showed an increase in knowledge of condoms as a means of preventing HIV infection among people surveyed.
The mission in Ghana reported that there was an increase in the proportion of people who knew that a healthy-looking person could have HIV (from 70 percent of women and 77 percent of men in 1993 to 75 percent and 82 percent, respectively, in 1998) but reported no change in the proportion who were aware of mother-to-child transmission (82 percent of women and 85 percent of men in 1993; 83 percent and 85 percent, respectively, in 1998). Moreover, surveys conducted for the mission in Tanzania showed that, between 1994 and 1999, the percentage of women who could name three ways to avoid getting HIV/AIDS increased from 11.4 percent to 24.2 percent. In the same country, the increase for men was from 22.6 percent to 28.6 percent. USAID has also attempted to measure the effectiveness of behavior change communication activities to help change sexual behavior. In seven countries where USAID undertook such prevention programs, surveys suggested reductions in risky sexual behavior. For example, in Senegal, more men and women who were surveyed reported having used a condom in 1999 than in 1992. More male youth surveyed reported that they were using condoms with their nonregular sex partners in 1998 than in 1997. The same sexual behavior survey of female commercial sex workers showed an increased use of condoms with regular clients; however, female commercial sex workers also reported less frequent use of condoms with their nonregular partners. Also in Senegal, a greater percentage of girls reported in 1998 that they had never had sex compared to a prior survey conducted in 1997. However, there was no change for boys. In Zambia, more sexually active women who were surveyed in 1998 reported having ever used a condom than in a similar survey in 1992, and in 1998, fewer married men in Zambia’s capital city reported having had extramarital sex than in a survey conducted 8 years earlier. 
Condom social marketing, which relies on increasing the availability, attractiveness, and demand for condoms through advertising and public promotion, is another intervention that USAID supports at the country level. It is well established that condoms are an effective means to prevent the transmission of HIV during sexual contact. The challenge for HIV/AIDS prevention, then, is one of expanded acceptance, availability, and use by high-risk groups. USAID projects in sub-Saharan Africa encourage production and marketing of condoms by the private sector to ensure the availability of affordable, quality condoms when and where people need them. USAID uses sales of condoms marketed through its program as a measure of the results of its condom promotion activities. USAID missions in 15 of 19 countries and one of three regional offices reported increased condom sales, with decreased sales reported in Malawi and Uganda. According to a USAID contractor, sales of condoms promoted under USAID’s program decreased in Malawi because of an economic downturn in that country and because another donor was providing free condoms. Sales in Uganda were affected by the introduction of a competing brand of condoms distributed by another donor. Between 1997 and 1999, the number of condoms sold more than doubled in Benin, from 2.9 million to 6.5 million, and increased in Zimbabwe from 2 million to 9 million. Condom sales in the Democratic Republic of the Congo grew more than 800 percent, from about 1 million in 1998 to 8.4 million in 1999. The number of sales outlets carrying socially marketed condoms also increased in Benin, Guinea, Malawi, and Mozambique. In addition to male condom marketing, five missions conducted social marketing of female condoms. Between 1998 and 1999, female condom sales increased in three of the four countries for which data were available but decreased in Zambia.
Management of sexually transmitted infections through improved prevention, diagnosis, and treatment is another important component of USAID’s HIV/AIDS efforts, because the risk of HIV transmission is significantly higher when other infections, such as genital herpes, are present. USAID has continued to support standardized diagnosis and treatment of sexually transmitted infections. For example, in Madagascar, USAID’s program supported improved diagnosis and treatment by targeting interventions to high-risk populations. USAID has also worked to integrate the teaching of how to prevent sexually transmitted infections into its existing reproductive health and outreach activities. As a way to measure the impact of its activities to improve management of sexually transmitted infections, USAID tracks the number of people trained in prevention, diagnosis, and treatment in that area. Seven USAID missions in sub-Saharan Africa reported assisting in the expansion of services for management of sexually transmitted infections. For example, USAID reported that it worked in 10 primary health facilities in Kenya to develop guidelines for diagnosing symptoms typical of sexually transmitted infections, and to develop health worker training materials. A total of 1,112 outreach workers and 55 health care providers were trained in sexually transmitted disease case management. In addition, the mission in Ghana stated that in 1999 it trained more than 200 medical practitioners and a total of 502 health care workers in public health facilities in the management of sexually transmitted infections. In Ghana’s police services, USAID trained 12 health care providers to recognize symptoms of sexually transmitted infections, trained 65 police peer educators, and helped establish an HIV/Sexually Transmitted Disease Unit at the police hospital. In addition to these three main prevention interventions, USAID missions also implemented activities in other areas. 
A few missions had activities aimed at improving the safety of blood for transfusions. In 2000, for example, the mission in Tanzania began collaborating with the U.S. Centers for Disease Control and Prevention and the Tanzanian Ministry of Health to improve blood safety and clinical protocols. The mission in Ethiopia continued programs directed at strengthening the capacity of nongovernmental organizations in the region to provide HIV services, while other missions worked to promote community involvement in providing care to those persons living with HIV. Twelve USAID missions and two regional offices promoted host government advocacy for improved HIV/AIDS policy environments. Some missions, such as the mission in Malawi, conducted workshops with key decisionmakers focusing on specific policy issues such as HIV testing and drug treatment for AIDS patients. The mission in Ghana sought to improve policies for reproductive health services through advocacy and policy development. According to USAID, its advocacy and policy development activities in Ghana led to the development of a national AIDS policy, which at the time of our review was awaiting parliamentary approval. Also, the mission in Nigeria indicated that its advocacy work on behalf of orphans and vulnerable children led the Nigerian President to announce in 2000 his intention to pursue free and compulsory education for them. The mission in Nigeria also reported helping establish three regional networks of people living with HIV/AIDS that later served as the precursor for a national HIV/AIDS support network.
Although USAID has collected data about its HIV/AIDS activities, in reviewing the information we received from USAID, we found that the agency’s overall monitoring and evaluation efforts are weak in three areas: (1) missions and regional offices use inconsistent indicators to measure program performance, (2) data collection is sporadic, and (3) there is no requirement for missions and regional offices to regularly report the data they collect. USAID’s response to our request for baseline and trend data to demonstrate program results showed that missions and regional offices did not use indicators of program outcomes that were consistent over time. Unless the scope of the missions’ surveys and the questions asked remained constant over time, comparing results would be difficult. For example, a 1994 survey in Ethiopia asking females to cite at least two ways to prevent HIV focused on females living in urban areas, whereas a 2000 survey focused on females nationwide. In another example, ever-use of condoms among men in Zimbabwe in 1999, as an indicator, did not directly relate to the proportion of men who in 1994 reported currently using condoms. In their written responses to our questions, the missions also did not link each prevention activity to a performance indicator, as we had requested. This made it difficult for us to assess the progress of the activities. For example, the mission in Mozambique provided training to health care and non-health care providers in the treatment of sexually transmitted infections but did not link these activities to specific performance indicators. Information obtained from USAID showed that the amount and frequency of data collection on HIV/AIDS prevention activities varied considerably. Several missions had implemented activities only recently, so baselines had not been established or trend data were still being collected.
Ten missions were still in the process of gathering baseline or trend data for many of their activities. For example, although the mission in Mozambique provided us with baseline and trend data on condom sales and a baseline for risky sexual behavior, comparison data for the latter measure will not be available until 2001. The Democratic Republic of the Congo and Madagascar have conducted activities in a number of areas, such as treatment of sexually transmitted infections, but only provided data to us for condom sales. Three missions that indicated having blood safety programs did not provide output or outcome measures to evaluate those programs. These inconsistencies in data collection hindered our ability to assess whether USAID’s HIV/AIDS prevention activities were meeting USAID’s objectives in sub-Saharan Africa. For example, we could not evaluate 2 of the 19 missions and two of the three regional offices with HIV/AIDS programs because they did not provide any data. Four missions only provided information on condom sales and distribution. Eleven missions and one regional office offered a much broader range of information, although the data provided did not directly relate to all of each program’s indicators or major activities, making it difficult for us to fully evaluate the results of each activity. For example, USAID’s Mozambique mission provided data on condom sales and distribution but not on mission-supported voluntary counseling and testing activities or on stigma reduction efforts. According to USAID, missions are not required to produce comprehensive monitoring and evaluation reports for each HIV/AIDS activity or indicator. Although in 1998 the Global Bureau established a repository for collecting and tracking performance data available to USAID organizational structures, including missions, there is no requirement for the missions to provide information to that database.
Each mission provides USAID’s Africa Bureau with an annual Results Review and Resource Request, in which the mission presents some results from the previous year in order to justify budget requests. However, according to senior USAID officials in headquarters, this report is not a monitoring and evaluation tool. According to an epidemiologist from the University of California and a USAID contractor specializing in HIV/AIDS evaluation, surveillance, and epidemiological research, regular monitoring and evaluation of HIV prevention programs is necessary to prevent wasting resources on programs that do not function properly. USAID officials noted that while its missions use data to track day-to-day operations, the lack of a reporting requirement affects the agency’s ability to generalize about agency performance and make management and funding decisions based on the data. This lack also inhibits the sharing of best practices because the agency cannot compare approaches across countries to determine which works best. Therefore, allocation of resources may not be optimal because the agency does not necessarily know which programs could benefit the most from financial investments. Without a reporting requirement, the agency has a limited ability to demonstrate the effectiveness of its programs. For example, USAID was unable to provide sufficient information to determine whether it met its 1999 performance goal of reducing HIV transmission and impact in developing countries, as required under the Government Performance and Results Act of 1993. USAID has developed a three-pronged approach for programming the 53-percent funding increase from fiscal year 2000 to fiscal year 2001 ($114 million to $174 million) for HIV/AIDS prevention in sub-Saharan Africa.
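The size of the funding increase cited above is straightforward to verify; the dollar figures are those reported in this section, and the short Python sketch below is illustrative only, not part of USAID’s methodology.

```python
def percent_increase(old: float, new: float) -> float:
    """Return the percentage increase from old to new."""
    return (new - old) / old * 100

# HIV/AIDS prevention funding for sub-Saharan Africa, in millions of
# dollars, for fiscal years 2000 and 2001 (figures from this report).
fy2000, fy2001 = 114, 174
print(round(percent_increase(fy2000, fy2001)))  # prints 53
```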
Under this approach, USAID (1) provided additional funds to countries designated in need of assistance, (2) allowed missions to expand or implement new activities and services, and (3) developed a plan for expanded monitoring and evaluation of the programs. To rank countries for funding priorities and allocations, USAID’s approach used several criteria, such as HIV/AIDS prevalence in a country and the economic impact of the disease. Separately, USAID identified several internal and external factors that may affect its ability to expand its HIV/AIDS activities. USAID has identified steps to mitigate some of the problems associated with these factors. USAID identified three categories of countries that are to receive expanded HIV/AIDS assistance based on their relative priority for action. Four “Rapid Scale-Up Countries” were designated as those that will receive significant increases in assistance for prevention, care, and support activities “to achieve measurable impact within 1-to-2 years.” Eleven “Intensive Focus Countries” (including one regional program) will receive a significant scaling-up of prevention activities and expanded services that will provide care and support. USAID’s plans are to work with other donors in these two country categories to expand programs to cover at least 80 percent of their populations with a comprehensive package of prevention and care services. USAID also plans to expand the scope, targeted populations, and geographic coverage of current HIV/AIDS programs in the 10 “Basic Program Countries” (including two regional programs). To determine which countries to include under each category, USAID used a number of criteria and conducted a worldwide survey of all USAID missions and regional offices.
The criteria included the relative severity of the epidemic in the country, the magnitude of the epidemic in the country, the impact of the epidemic on the economy and society, the risk of a rapid increase in HIV prevalence, the availability of other funding sources, U.S. national interests, and the strength of host country partnerships. USAID planners then supplemented these criteria with the missions’ and regional offices’ survey responses. Factors considered were the total level of resources that could be effectively obligated, the speed with which those funds could be obligated, the likely programmatic impacts, the nature of new and expanded activities, and the personnel constraints that might be encountered, among other items. Table 2 shows the amount of increased funding from fiscal year 2000 to fiscal year 2001, by mission and regional program by category of country. New and expanded activities under USAID’s scaled-up efforts will include prevention of HIV transmission from mother to child; development of community-based programs designed to provide care to children affected by HIV/AIDS; provision of treatment and prevention of tuberculosis and other opportunistic diseases; and development of multisectoral programs, such as girls’ education and finance for economic development efforts. USAID’s approach for scaling up its HIV/AIDS programs in fiscal year 2001 included a plan for expanded monitoring and evaluation of the agency’s HIV/AIDS programs. Under the plan, USAID expects all missions receiving HIV/AIDS funding to collect and report data annually on HIV prevalence rates for 15- to 24-year-olds and on condom usage with the last non-regular sexual partner.
Depending on USAID activities in country, USAID missions may also be required to report periodically on additional indicators, such as total condoms sold, the percent of target populations requesting HIV tests, and others included in USAID’s “Handbook of Standard Indicators.” According to USAID, when implemented, these efforts will be conducted at routine intervals ranging from annual assessments to surveys conducted every 3 to 5 years. While the monitoring and evaluation plan applies to all country missions receiving HIV/AIDS funding, initial priority will be placed upon rapid scale-up and intensive focus countries. However, it is not clear when USAID plans to require the remaining countries to apply the standard indicators and collect and report the performance data. In addition, the plan does not specify to whom these performance data will be reported beyond the mission level or how the information will be used, for example, for resource allocation or identification of best practices. While USAID’s approach provides criteria for funding new USAID activities to reduce the spread of HIV/AIDS, USAID officials reported that a number of factors internal to USAID may hamper its efforts to expand HIV/AIDS programs in sub-Saharan Africa. These factors include problems with contracting and procurement, and reported declines in program and technical staff in both missions and headquarters. To deliver HIV/AIDS assistance programs, USAID uses competitive contracts and grants, including cooperative agreements. These agreements are generally made between USAID and private voluntary organizations, not-for-profit organizations, research centers, universities, and international organizations. The agreements involve substantial interaction between USAID and the recipient organization during performance of the assistance programs. 
USAID contracting officials reported that, on average, it takes 210 days to conclude cooperative agreements for the Global Bureau’s population, health, and nutrition activities, which include HIV/AIDS. This is one of the longest cycles for such agreements within the federal government. The officials further reported that USAID has been unable to recruit and retain sufficient numbers of qualified contract specialists, both in the missions and in Washington, and, as a result, the workload for the current specialists is high. For example, USAID reported that in 1998 its procurement personnel were responsible for $18.3 million worth of agreements per specialist. This was considerably higher than the workload of procurement specialists in other federal agencies, such as the Departments of the Treasury and of Transportation ($5.3 million per specialist) and the Department of Energy ($2.9 million per specialist). In addition, USAID reported that currently each specialist is responsible, on average, for 26 distinct types of agreements, while some contract specialists in the field are responsible for procurements in multiple missions and regional programs. USAID officials said that the agency has worked to lessen the workload burden on contract specialists by taking such actions as developing a vehicle that allows missions to contract directly with contract awardees rather than through USAID headquarters. Agency officials reported that the requirement to “Buy American” is a second procurement issue that could affect the timing of USAID’s program expansion. According to USAID officials, when purchasing commodities for assistance programs, USAID is required to buy those made in the United States. USAID officials stated that although this rule may be waived when a specific commodity required for the program can only be purchased from a foreign manufacturer, a waiver must be sought each time the commodity is purchased.
According to these officials, the waiver process can take up to 4 weeks for each waiver, depending on the workload of the contracting specialist, the location of the office applying for the waiver, and the amount of the purchase. In January 2001, USAID instituted a policy to grant source and origin waivers for extended periods of time in emergency situations. For example, under this policy, USAID has approved an extended waiver through 2007 for HIV testing kits manufactured offshore. According to USAID, these kits allow for quicker test results and cost significantly less than those manufactured in the United States. Another factor USAID identified that may affect program expansion is the lack of sufficiently experienced personnel in missions to staff the scaled-up programs. From the end of fiscal year 1992 to the end of fiscal year 1999, total staff levels of USAID foreign service employees working overseas declined by 40 percent, from just over 1,080 to about 650. Between the end of fiscal year 1992 and the end of fiscal year 1999, the total number of overseas foreign service employees working in program management declined by 41 percent, while those working in support management (such as financial management and contracts) declined by almost 31 percent. USAID has tried to compensate for the loss of experienced personnel by entering into personal service contracts, particularly for support management positions like procurement. These contracts are short term, however, and officials stated that the contractors generally lack the experience, capabilities, and organizational knowledge of permanent employees. In addition, USAID reported it lacks sufficient personnel in some missions with the specialized, technical skills necessary for conducting new activities.
For example, programs designed to reduce the incidence of mother-to-child HIV transmission will require professionals experienced in medical fields, particularly those with nursing and pharmacological backgrounds. USAID also reports that in developing countries, the labor pool from which to draw individuals with medical backgrounds is small. Professionals were often recruited from organizations that provided similar services—the United Nations, other multinational assistance agencies, and private voluntary organizations. USAID also faces external factors related to the weak health care infrastructure common in sub-Saharan Africa that may affect the agency’s ability to expand its programs. These factors include a lack of surveillance, response, and prevention systems; limited numbers of skilled health care workers; and underdeveloped pharmaceutical distribution capabilities. Further, the capability of local nongovernmental organization sectors to expand the scope of current services and deliver new services is not known. The low level of publicly financed health care spending as a proportion of gross domestic product (GDP) has resulted in poor health care infrastructure and could affect USAID’s efforts to expand and create HIV/AIDS programs. In 1999, the U.S. Armed Forces Medical Intelligence Center reported that, with the exception of South Africa, sub-Saharan governments view health care as a low national priority. World Health Organization data indicate that in 1995, publicly financed health care spending in sub-Saharan Africa amounted to 1.7 percent of total GDP. This rate was 35 percent lower than the corresponding proportion for all World Health Organization member states and 74 percent lower than the Organization’s figure for the United States.
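The spending comparison above can be made concrete by backing out the implied comparison levels; the 1.7 percent, 35 percent, and 74 percent figures come from this report, while the arithmetic below is our illustration, not WHO data.

```python
# Publicly financed health care spending in sub-Saharan Africa, 1995,
# as a percent of GDP (reported above).
ssa_share = 1.7

# If that figure is 35 percent lower than the all-member average and
# 74 percent lower than the U.S. figure, the implied comparison values
# follow from ssa_share = other * (1 - reduction).
all_members = ssa_share / (1 - 0.35)
united_states = ssa_share / (1 - 0.74)

print(round(all_members, 1))    # roughly 2.6 percent of GDP
print(round(united_states, 1))  # roughly 6.5 percent of GDP
```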
The Armed Forces Medical Intelligence Center reported that as a result of the low levels of publicly financed health care spending, the majority of sub-Saharan African countries have only rudimentary or no domestic systems for epidemiological surveillance, response, or prevention.

Few Skilled Health Care Workers

Another external factor that could affect USAID’s efforts to improve care and treatment for people with AIDS is the low numbers of skilled health care workers. In a 1998 report, the World Health Organization showed that in the sub-Saharan African countries in which USAID maintains missions, the number of physicians per 100,000 people ranged from a low of 2.3 per 100,000 people in Liberia (1997) to a high of 56.3 per 100,000 people in South Africa (1996). As a comparison, the ratio for the United States in 1995 was 279 physicians per 100,000 people. The number of nurses per 100,000 people is similarly low. South Africa showed the highest ratio, with 472 nurses per 100,000 people (1996), still less than one-half the rate of 972 per 100,000 in the United States (1996). Without adequate numbers of health care personnel, it will be difficult for USAID to meet its goals to improve care and treatment for people with AIDS. Underdeveloped pharmaceutical distribution and delivery capabilities could also affect USAID’s ability to provide the drugs needed for the prevention of mother-to-child HIV transmission and other care and treatment programs for opportunistic diseases. As stated in a 1999 GAO report, problems associated with these networks include outdated refrigeration units; a lack of reliable delivery trucks; and health care workers who have not been trained in the storage, handling, and usage of the pharmaceuticals. These factors tend to lead to low coverage rates for people needing the medicines, as well as high costs due to large amounts of wasted product.
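To put the health worker ratios above in perspective, the multiples between the U.S. and regional figures can be computed directly; the per-100,000 values are the WHO figures cited in this report, and the sketch below is purely illustrative.

```python
# Physicians per 100,000 people (WHO figures cited above).
liberia, south_africa, united_states = 2.3, 56.3, 279

# The United States has roughly 121 times Liberia's physician density
# and about 5 times South Africa's.
print(round(united_states / liberia))       # prints 121
print(round(united_states / south_africa))  # prints 5

# Nurses per 100,000: South Africa's 472 is under half the U.S. rate of 972.
print(472 / 972 < 0.5)  # prints True
```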
Most indigenous nongovernmental organizations currently delivering HIV/AIDS services in sub-Saharan Africa are small and operate solely in their home localities. However, missions do not routinely assess nongovernmental organization capacity on a countrywide basis. Therefore, it is unclear whether in the short term existing nongovernmental organizations have the capacity to expand their services either to new geographic areas or by increasing efforts within the presently served area. In addition, it is unclear whether capacity and technical expertise exist among nongovernmental organizations to provide new services, such as those for the prevention of mother-to-child transmission and other treatment and care. According to USAID, some of the new programmatic activities for this year’s increase will be directed toward helping nongovernmental organizations develop both technical expertise and managerial systems so that future year funding increases may be absorbed more readily. The AIDS epidemic in sub-Saharan Africa has grown beyond a public health problem to become a humanitarian and developmental crisis. USAID has contributed to the fight against HIV/AIDS in sub-Saharan Africa by focusing on interventions proven to slow the spread of the disease. However, USAID’s ability to measure the impact of its activities on reducing transmission of HIV/AIDS is limited by (1) inconsistent use of performance indicators, (2) sporadic data collection, and (3) lack of routine reporting of results to headquarters. As part of its approach for allocating the 53-percent increase in funding ($114 million to $174 million) for HIV/AIDS prevention activities in sub-Saharan Africa for fiscal year 2001, USAID prepared a plan to expand monitoring and evaluation systems in “rapid scale-up” and “intensive focus countries”—countries designated as in need of significant increases in assistance. 
However, when implemented, the monitoring and evaluation requirements in the plan will not initially include all countries where USAID missions and regional offices in sub-Saharan Africa implement HIV/AIDS programs. Further, the plan does not specify to whom these data will be reported or how the information will be used. Failure to address these issues not only inhibits USAID’s ability to measure the performance of its HIV/AIDS activities but also hinders the agency’s decision-making regarding allocation of resources among missions and regional offices and limits efforts to identify best practices. To enhance USAID’s ability to measure its progress in reducing the spread of HIV/AIDS in sub-Saharan Africa and better target its resources, we recommend that the Administrator, USAID, require that all missions and regional offices that conduct HIV/AIDS prevention activities select standard indicators to measure the progress of their HIV/AIDS activities; gather performance data, based on these indicators, for key HIV/AIDS activities on a regular basis; and report performance data to a unit, designated by the Administrator, for analysis. We received written comments on a draft of this report from the U.S. Agency for International Development that are reprinted in appendix III. The agency acknowledged our key concern that performance indicators at the country level were too inconsistent to measure progress over time and agreed that more comparable data are needed to assure better measurement of the overall impact of its HIV/AIDS programs. The agency stated that it is taking important steps, as recommended in the report, to facilitate the collection and dissemination of comparable national data. We modified our draft where appropriate to better reflect the agency’s contributions and actions it has recently taken to address some of the problems identified in our report.
In addition, the agency also provided technical comments to update or clarify key information that we incorporated, where appropriate. We are sending this report to appropriate congressional committees and to the Administrator of USAID. We will also make copies available to other interested parties upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-8979. Other GAO contact and staff acknowledgments are listed in appendix IV. At the request of the Chairman of the Senate Subcommittee on African Affairs, Committee on Foreign Relations, we examined the U.S. Agency for International Development’s (USAID) efforts to reduce the spread of the Human Immunodeficiency Virus/Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic in sub-Saharan Africa. Specifically, we (1) identified the development and impact of the HIV/AIDS epidemic in sub-Saharan Africa and the challenges to slowing its spread, (2) assessed the extent to which the U.S. Agency for International Development’s initiatives have contributed to the fight against AIDS in sub-Saharan Africa, and (3) identified the approach the agency used to allocate increased funding and the factors that may affect the agency’s ability to expand its HIV/AIDS program in sub-Saharan Africa in response to this funding. To identify the development and impact of the HIV/AIDS epidemic in sub-Saharan Africa and the challenges to slowing its spread, we spoke with senior officials from the U.S. Agency for International Development’s Washington, D.C., headquarters (the Global Bureau’s HIV/AIDS Division and the Africa Bureau), the U.S. Bureau of the Census, the Office of National AIDS Policy, the State Department, and the Joint United Nations Programme on HIV/AIDS (UNAIDS). We reviewed relevant documents and reports from these agencies and from the U.N. 
International Labour Office; the National Intelligence Council; the World Bank; the World Health Organization; summaries of papers presented at the XIII International AIDS Conference in Durban, South Africa, in July 2000; and articles from scientific journals. To assess the extent to which USAID initiatives have reduced HIV transmission in sub-Saharan Africa, we reviewed USAID program documents that described the agency’s objective to reduce the transmission and mitigate the impact of HIV/AIDS. We reviewed documentation from the Global Bureau’s HIV/AIDS Division that described the activities and accomplishments of its portfolio of HIV/AIDS programs, and we held discussions with key USAID officials and contractors, including Family Health International, Population Services International, TVT Associates, and the Futures Group. To assess the contributions of the agency’s Africa Bureau, we reviewed the Results Review and Resource Request for the bureau and discussed performance data with key officials. At the country level, we sent a list of questions about activities, performance indicators used, and results achieved through fiscal year 2000 to the Africa Bureau, which distributed the questions to those missions and regional offices in sub-Saharan Africa that had implemented HIV/AIDS activities. We reviewed and consolidated the answers received from 19 USAID field missions and 3 regional offices that had HIV/AIDS activities. We examined program performance based on data received, which included results from local activity records and surveys, demographic and health surveys, behavioral surveillance surveys, and condom sales. We included country-specific information gathered from mission and regional Results Review and Resource Requests for fiscal year 2002, the Global Bureau’s HIV/AIDS Division, Population Services International, and Family Health International. 
We also contacted several missions via e-mail to follow up on and clarify information they provided in response to our questions. In addition, we supplemented our work by visiting USAID missions in Malawi, Tanzania, Uganda, and Zimbabwe and held discussions with the USAID Population, Health, and Nutrition officers to verify data provided in the written responses to our questions and to follow up on some key points. We chose these four countries to work in conjunction with other ongoing GAO work on disease surveillance in the region. These countries have some of the highest HIV/AIDS prevalence rates in the region and provide perspective on countries with new and established USAID HIV/AIDS programs. To discuss the impact of limited monitoring and evaluation data on USAID strategic planning, budgeting, and dissemination of best practices, we met with officials from USAID’s Bureau of Policy and Program Coordination. To identify the process USAID used to allocate increased funding and the factors that may affect how quickly USAID can expand its HIV/AIDS programs in the region, we held discussions with officials at USAID headquarters in Washington from the Global Bureau’s HIV/AIDS Division, Africa Bureau, and the Office of Procurement. We also conducted interviews with mission officials based in Kenya, Malawi, Tanzania, Uganda, Zambia, and Zimbabwe, and personnel employed by private voluntary organizations providing HIV/AIDS services under cooperative agreements with USAID. In addition, we reviewed budgetary, personnel, and contracting documentation and examined mission responses to a field survey on implementation of fiscal year 2001 HIV/AIDS funding that was conducted by the Africa Bureau, and planning documents based upon these surveys. Finally, we reviewed additional information provided by USAID, foreign governmental health ministries, the United Nations, and other multilateral assistance agencies.
We conducted our work from April 2000 through January 2001 in accordance with generally accepted government auditing standards. In sub-Saharan Africa, USAID primarily implemented HIV/AIDS programs through three of its organizational structures: the Global Bureau’s HIV/AIDS Division, the Africa Bureau, and the field missions and regional offices. This appendix focuses on the key contributions of USAID’s Global and Africa Bureaus. The Global Bureau provided leadership in the areas of operations research, technical assistance, and capacity building for surveillance. The Africa Bureau led the effort to integrate HIV/AIDS activities into other sectors of country development programs. We discussed field mission contributions in the body of this report. In conducting operations research, the Global Bureau is currently supporting 60 ongoing studies to test solutions to problems in the areas of management of sexually transmitted infections, care and support services, and policy analysis and change. Another Global Bureau project, started in 1995, has helped reform host government HIV/AIDS policies. For example, the project assisted Ethiopia in developing the regulations that established its National AIDS Council, which is responsible for coordinating and integrating HIV/AIDS initiatives. In addition, the project provided technical assistance, equipment, and training to the secretariats of the Addis Ababa Regional AIDS Council, which was formed in February 2000, and the Amhara Regional HIV/AIDS Task Force, formed in 1999. The Global Bureau provided technical assistance through several initiatives. For example, one project, begun in 1998, provides technical assistance to the Global Bureau’s HIV/AIDS Division, the regional bureaus, and the field missions. In addition to being a resource for the expertise needed to design HIV/AIDS strategic objectives and plans, the project was initiated to monitor processes, outcomes, and impacts of HIV/AIDS prevention programs.
To achieve this goal, the project established a database to aggregate and disseminate research, implementation, and evaluation assessment findings. Another initiative was the development of a handbook of standard indicators, completed in March 2000, for measuring and evaluating HIV/AIDS prevention activities. This handbook is an important step toward providing universal measurement of HIV/AIDS prevention programs and could be used for comparison and tracking of program successes worldwide. The Global Bureau is also working in concert with the U.S. Centers for Disease Control to assist countries in sub-Saharan Africa in developing appropriate HIV/AIDS surveillance guidelines; to carry out research to address how best to measure HIV incidence and estimate national HIV prevalence; and to provide assistance to USAID missions to develop, improve, and use HIV/AIDS surveillance systems. According to USAID, the improved national surveillance systems should be in place to allow for annual measurement of HIV prevalence beginning in 2001. The Africa Bureau provided technical assistance to support mission activities and led the effort to promote the integration of HIV/AIDS prevention efforts into other development activities, such as economic growth, democracy and governance, education, and agriculture. Because of the impact of HIV/AIDS on the economies of the most affected countries, according to Africa Bureau officials, USAID’s strategy for economic growth must integrate HIV/AIDS activities to reach successful results. In the same way, the Africa Bureau is supporting the integration of HIV/AIDS activities into democracy and governance programs, including human rights, particularly those that advocate for women. According to USAID, it is important to integrate HIV/AIDS activities into the education sector because much of the progress made in developing countries over the past three decades has been due to greater numbers of youth going to school.
Agriculture and natural resource development is important, since sustainable agriculture is necessary for economic development, and HIV/AIDS is a factor that leads to decreased production as more and more people get sick and die. To help national governments understand the effects of HIV/AIDS on various sectors and to help missions advocate for the development of sector-specific responses to the epidemic, the Africa Bureau funded the development of a set of toolkits and briefs. For example, the AIDS toolkit for the Ministry of Education helps officials recognize the internal and external impacts of HIV/AIDS—such as higher employee absenteeism and reduced school enrollment—and identify appropriate action responses. The commercial agriculture brief indicates how AIDS affects human resources and agricultural operations and provides some suggestions for contingency planning to deal with the impact of HIV/AIDS. The toolkits were discussed at two regional workshops held in Durban, South Africa, in 2000, which were organized by the University of Natal as part of a USAID contract. The first workshop, on education, resulted in the formation of a task force. The purpose of the task force was to help ministries of education in different countries assess the impact of HIV/AIDS and apply the toolkit. The second workshop was for officials from the ministries of Planning and Finance. It offered a forum to discuss the impact of HIV/AIDS on the economy and changes in the government and development strategies that may be necessary to meet the crisis. The following are GAO’s comments on the U.S. Agency for International Development’s letter dated February 23, 2001. 1. USAID commented that the introduction and conclusions sections of the report did not reflect its accomplishments as presented in the body of the report.
To highlight their accomplishments, USAID noted that the agency is the single largest donor in Uganda, Senegal, and Zambia, countries where the fight against AIDS has been successful. However, the agency fails to note that other sub-Saharan African countries, where USAID has HIV/AIDS programs, have not been as successful in the fight against AIDS. USAID acknowledges that success in countries is the result of the combined efforts of national governments, USAID, and other donors, not exclusively the work of one donor. Finally, appendix II of the report recognizes many of USAID’s contributions in operations research, technical assistance, and partnerships with other organizations, such as the U.S. Centers for Disease Control. Nonetheless, we have modified the report to describe the agency’s accomplishments contained in the body of the report. 2. USAID stated that the report did not fully recognize that performance data are collected and utilized for decision-making at both the mission and headquarters levels and for sharing lessons learned. We modified the report to clarify that USAID’s country-level missions use data to manage day-to-day operations. However, we found that inconsistent performance indicators and the lack of routine reporting of results to headquarters limit USAID’s ability to assess its overall policies and approaches and thereby develop lessons learned from across all its missions. The UNAIDS publication cited by USAID is a summary of USAID-supported research efforts shared with its partners. This document does not address our concern that USAID, based on information reported by its missions, develop a lessons-learned assessment of best practices in combating AIDS that USAID headquarters can disseminate to all its missions. 3. USAID commented that the report did not cite important actions it has taken, such as developing a handbook of standardized indicators for HIV/AIDS programs.
This handbook was discussed in the body of the report and highlighted among the contributions we cited in appendix II. The report recognized the handbook as an important step toward providing universal measurement of HIV/AIDS prevention programs. We have made no additional changes to the report. 4. USAID commented that the report did not include some important steps that USAID has taken to overcome internal factors that could hinder HIV/AIDS program expansion. USAID provided documentary evidence to support its assertion that the agency has streamlined its procurement policies for purchasing HIV/AIDS diagnostic kits. We therefore modified our report to add a specific reference to USAID’s initiation of a policy in January 2001 that extends a waiver of the “Buy American Act” requirements to allow for the purchase of HIV products manufactured offshore. In addition to Mr. Hutton, David Bernet, Leslie Bharadwaja, Aleta Hancock, Lynne Holloway, Jessica Lucas, Rona Mendelsohn, and Tom Zingale made key contributions to this report. The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Orders by visiting: Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders by phone: (202) 512-6000 fax: (202) 512-6061 TDD (202) 512-2537 Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: [email protected] 1-800-424-5454 (automated answering system) | The AIDS epidemic in sub-Saharan Africa has grown beyond a public health problem to become a humanitarian and developmental crisis. The Agency for International Development (AID) has contributed to the fight against human immunodeficiency virus (HIV)/AIDS in sub-Saharan Africa by focusing on interventions proven to slow the spread of the disease. However, AID's ability to measure the impact of its activities on reducing transmission of HIV/AIDS is limited by (1) inconsistent use of performance indicators, (2) sporadic data collection, and (3) lack of routine reporting of results to headquarters. As part of its approach for allocating the 53 percent increase in funding for HIV/AIDS prevention activities in sub-Saharan Africa for fiscal year 2001, AID prepared a plan to expand monitoring and evaluation systems in countries designated as in need of significant increases in assistance. However, when implemented, the monitoring and evaluation requirements in the plan will not initially include all countries where AID missions and regional offices in sub-Saharan Africa implement HIV/AIDS programs. Further, the plan does not specify to whom these data will be reported or how the information will be used. Failure to address these issues not only inhibits AID's ability to measure the performance of its HIV/AIDS activities but also hinders the agency's decision-making regarding allocation of resources among missions and regional offices and limits efforts to identify best practices. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Over the last three decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. For example, to assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996. More recently, in December 2014, Congress enacted IT acquisition reform legislation (commonly referred to as the Federal Information Technology Acquisition Reform Act or FITARA) that, among other things, requires OMB to develop standardized performance metrics, including cost savings, and to submit quarterly reports to Congress on cost savings. In carrying out its responsibilities, OMB uses several data collection mechanisms to oversee federal IT spending during the annual budget formulation process. Specifically, OMB requires federal departments and agencies to provide information related to their Major Business Cases (previously known as exhibit 300) and IT Portfolio Summary (previously known as exhibit 53). OMB directs agencies to break down IT investment costs into two categories: (1) O&M and (2) development, modernization, and enhancement (DME). O&M (also known as steady-state) costs refer to the expenses required to operate and maintain an IT asset in a production environment. DME costs refer to those projects and activities that lead to new IT assets/systems, or change or modify existing IT assets to substantively improve capability or performance. In addition, OMB has developed guidance that calls for agencies to develop an operational analysis policy for examining the ongoing performance of existing legacy IT investments to measure, among other things, whether the investment is continuing to meet business and customer needs. Nevertheless, federal IT investments have too frequently failed or incurred cost overruns and schedule slippages while contributing little to mission-related outcomes.
The federal government has spent billions of dollars on failed and poorly performing IT investments which often suffered from ineffective management, such as project planning, requirements definition, and program oversight and governance. Accordingly, in February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives underway, including reviews of troubled projects, an emphasis on incremental development, a key transparency website, data center consolidation, and the O&M of legacy systems. To make progress in this area, we identified actions that OMB and the agencies need to take. These include implementing the recently-enacted statutory requirements promoting IT acquisition reform, as well as implementing our previous recommendations. In the last 6 years, we made approximately 800 recommendations to OMB and multiple agencies to improve effective and efficient investment in IT. As of October 2015, about 32 percent of these recommendations had been implemented. We have previously reported on legacy IT and the need for the federal government to improve its oversight of such investments. For example, in October 2012, we reported on agencies’ operational analyses policies and practices. In particular, we reported that although OMB guidance called for each agency to develop an operational analysis policy and perform such analyses annually, the extent to which the selected federal agencies we reviewed carried out these tasks varied significantly. The Departments of Defense (Defense), the Treasury (Treasury), and Veterans Affairs (VA) had not developed a policy or conducted operational analyses. As such, we recommended that the agencies develop operational analysis policies, annually perform operational analyses on all investments, and ensure the assessments include all key factors. 
Further, we recommended that OMB revise its guidance to include directing agencies to post the results of such analyses on the IT Dashboard. OMB and the five selected agencies agreed with our recommendations and have efforts planned and underway to address them. In particular, OMB issued guidance in August 2012 directing agencies to report operational analysis results along with their fiscal year 2014 budget submission documentation (e.g., exhibit 300) to OMB. Thus far, operational analyses have not yet been posted on the IT Dashboard. We further reported in November 2013 that agencies were not conducting proper analyses. Specifically, we reported on IT O&M investments and the use of operational analyses at selected agencies and determined that of the top 10 investments with the largest spending in O&M, only a Department of Homeland Security (DHS) investment underwent an operational analysis. DHS’s analysis addressed most, but not all, of the factors that OMB called for (e.g., comparing current cost and schedule against original estimates). The remaining agencies did not assess their investments, which accounted for $7.4 billion in reported O&M spending. Consequently, we recommended that seven agencies perform operational analyses on their IT O&M investments and that DHS ensure that its analysis was complete and addressed all OMB factors. Three of the agencies agreed with our recommendations; two partially agreed; and two agencies had no comments. As discussed in our report, federal agencies reported spending the majority of their fiscal year 2015 IT funds on operating and maintaining a large number of legacy (i.e., steady-state) investments. Of the more than $80 billion reportedly spent on federal IT in fiscal year 2015, 26 federal agencies spent about $61 billion on O&M, more than three-quarters of the total amount spent. 
Specifically, data from the IT Dashboard shows that, in 2015, 5,233 of the government’s nearly 7,000 IT investments were spending all of their funds on O&M activities. This is a little more than three times the amount spent on DME activities (see figure 1). According to agency data reported to OMB’s IT Dashboard, the 10 IT investments spending the most on O&M for fiscal year 2015 total $12.5 billion, 20 percent of the total O&M spending, and range from $4.4 billion on Department of Health and Human Services’ (HHS) Centers for Medicare and Medicaid Services’ Medicaid Management Information System to $666.1 million on HHS’s Centers for Medicare and Medicaid Services IT Infrastructure investment (see table 1). Over the past 7 fiscal years, O&M spending has increased, while the amount invested in developing new systems has decreased by about $7.3 billion since fiscal year 2010. (See figure 2.) Further, agencies have increased the amount of O&M spending relative to their overall IT spending by 9 percent since 2010. Specifically, in fiscal year 2010, O&M spending was 68 percent of the federal IT budget, while in fiscal year 2017, agencies plan to spend 77 percent of their IT funds on O&M. (See figure 3.) Further, 15 of the 26 agencies have increased their spending on O&M from fiscal year 2010 to fiscal year 2015, with 10 of these agencies having over a $100 million increase. The spending changes per agency range from an approximately $4 billion increase (HHS) to a decrease of $600 million (National Aeronautics and Space Administration). 
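The spending shares above follow from simple arithmetic; the sketch below checks them using only the rounded figures reported in this testimony (total IT spending, O&M spending, and the top-10 total are taken from the text, and DME is assumed to be the remainder, so the results are approximations rather than exact budget data).

```python
# Rough arithmetic check of the fiscal year 2015 O&M spending figures cited above.
# All inputs are rounded values reported in the testimony, not exact budget data.

total_it_spending = 80.0   # reported as "more than $80 billion" in federal IT spending
om_spending = 61.0         # reported O&M spending, in billions
dme_spending = total_it_spending - om_spending  # assumed: remainder went to DME activities

om_share = om_spending / total_it_spending
print(f"O&M share of total IT spending: {om_share:.0%}")        # roughly 76%, i.e. "more than three-quarters"
print(f"O&M vs. DME ratio: {om_spending / dme_spending:.1f}x")  # roughly 3.2x, "a little more than three times"

# Top 10 O&M investments: $12.5 billion of the $61 billion O&M total
top10_share = 12.5 / om_spending
print(f"Top 10 investments' share of O&M: {top10_share:.0%}")   # roughly 20%, as reported
```

Because the inputs are rounded, the computed shares confirm the testimony's characterizations rather than reproduce exact budget totals.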
OMB staff in the Office of E-Government and Information Technology have recognized the upward trend of IT O&M spending and identified several contributing factors, including (1) the support of O&M activities requires maintaining legacy hardware, which costs more over time, and (2) costs are increased in maintaining applications and systems that use older programming languages, since programmers knowledgeable in these older languages are becoming increasingly rare and thus more expensive. Further, OMB officials stated that in several situations where agencies are not sure whether to report costs as O&M or DME, agencies default to reporting as O&M. According to OMB, agencies tend to categorize investments as O&M because they attract less oversight, require reduced documentation, and have a lower risk of losing funding. According to OMB guidance, the O&M phase is often the longest phase of an investment and can consume more than 80 percent of the total lifecycle costs. As such, agencies must actively manage their investment during this phase. To help them do so, OMB requires that CIOs submit ratings that reflect the level of risk facing an investment. In addition, in instances where investments experience problems, agencies can perform a TechStat, a face-to-face meeting to terminate or turn around IT investments that are failing or not producing results. In addition, OMB directs agencies to monitor O&M investments through operational analyses, which should be performed annually and assess costs, schedules, whether the investment is still meeting customer and business needs, and investment performance. Several O&M investments were rated as moderate to high risk in fiscal year 2015. Specifically, CIOs from the 12 selected agencies reported that 23 of their 187 major IT O&M investments were moderate to high risk as of August 2015. They requested $922.9 million in fiscal year 2016 for these investments. 
Of the 23 investments, agencies had plans to replace or modernize 19 investments. However, the plans for 12 of those were general or tentative in that the agencies did not provide specificity on time frames, activities to be performed, or functions to be replaced or enhanced. Further, agencies did not plan to modernize or replace 4 of the investments (see table 2). The lack of specific plans to modernize or replace these investments could result in wasteful spending on moderate and high-risk investments. While agencies generally conducted the required operational analyses, they did not consistently perform TechStat reviews on all of the at-risk investments. Although not required, agencies had performed TechStats on only five of the 23 at-risk investments. In addition, operational analyses were not conducted for four of these investments (see table 3). Agencies provided several reasons for not conducting TechStats and required assessments. For example, according to agency officials, several of the investments’ risk levels were reduced to low or moderately low risk in the months since the IT Dashboard had been publicly updated. Regarding assessments, one official stated that, in place of operational analyses, the responsible bureau reviews the status of the previous month’s activities for the development, integration, modification, and procurement to report issues to management. However, this monthly process does not include all of the key elements of an operational analysis. Until agencies ensure that their O&M investments are fully reviewed, the government’s oversight of old and vulnerable investments will be impaired and the associated spending could be wasteful. Legacy IT investments across the federal government are becoming increasingly obsolete. Specifically, many use outdated languages and old parts. Numerous old investments are using obsolete programming languages. 
Several agencies, such as the Department of Agriculture (USDA), DHS, HHS, Justice, Treasury, and VA, reported using Common Business Oriented Language (COBOL)—a programming language developed in the late 1950s and early 1960s—to program their legacy systems. It is widely known that agencies need to move to more modern, maintainable languages, as appropriate and feasible. For example, the Gartner Group, a leading IT research and advisory company, has reported that organizations using COBOL should consider replacing the language and in 2010 noted that there should be a shift in focus to using more modern languages for new products. In addition, some legacy systems may use parts that are obsolete and more difficult to find. For instance, Defense is still using 8-inch floppy disks in a legacy system that coordinates the operational functions of the United States’ nuclear forces. (See figure 4.) Further, in some cases, the vendors no longer provide support for hardware or software, creating security vulnerabilities and additional costs. For example, each of the 12 selected agencies reported using unsupported operating systems and components in their fiscal year 2014 reports pursuant to the Federal Information Security Management Act of 2002. Commerce, Defense, Treasury, HHS, and VA reported using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago. Lastly, legacy systems may become increasingly more expensive as agencies have to deal with the previously mentioned issues and may pay a premium to hire staff or contractors with the knowledge to maintain outdated systems. For example, one agency (SSA) reported re-hiring retired employees to maintain its COBOL systems. Selected agencies reported that they continue to maintain old investments in O&M. For example, Treasury reported systems that were about 56 years old. Table 4 shows the 10 oldest investments and/or systems, as reported by selected agencies. 
Agencies reported having plans to modernize or replace each of these investments and systems. However, the plans for five of those were general or tentative in that the agencies did not provide specific time frames, activities to be performed, or functions to be replaced or enhanced. Separately, in our related report, we profiled one system or investment from each of the 12 selected agencies. The selected systems and investments range from 11 to approximately 56 years old, and serve a variety of purposes. Of the 12 investments or systems, agencies had plans to replace or modernize 11 of these. However, the plans for 3 of those were general or tentative in that the agencies did not provide specificity on time frames, activities to be performed, or functions to be replaced or enhanced. Further, there were no plans to replace or modernize 1 investment. We have previously provided guidance that organizations should periodically identify, evaluate, and prioritize their investments, including those that are in O&M; at, near, or exceeding their planned life cycles; and/or are based on technology that is now obsolete, to determine whether the investment should be kept as-is, modernized, replaced, or retired. This critical process allows the agency to identify and address high-cost or low-value investments in need of update, replacement, or retirement. Agencies are, in part, maintaining obsolete investments because they are not required to identify, evaluate, and prioritize their O&M investments to determine whether they should be kept as-is, modernized, replaced, or retired. According to OMB staff from the Office of E-Government and Information Technology, OMB has created draft guidance that will require agencies to identify and prioritize legacy information systems that are in need of replacement or modernization. 
Specifically, the guidance is intended to develop criteria through which agencies can identify the highest priority legacy systems, evaluate and prioritize their portfolio of existing IT systems, and develop modernization plans that will guide agencies’ efforts to streamline and improve their IT systems. The draft guidance includes time frames for the efforts regarding developing criteria, identifying and prioritizing systems, and planning for modernization. However, OMB did not commit to a firm time frame for when the policy would be issued. Until this policy is finalized and carried out, the federal government runs the risk of continuing to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Regarding upgrading obsolete investments, in April 2016, the IT Modernization Act was introduced into the U.S. House of Representatives. If enacted, it would establish a revolving fund of $3 billion that could be used to retire, replace, or upgrade legacy IT systems to transition to new, more secure, efficient, modern IT systems. It also would establish processes to evaluate proposals for modernization submitted by agencies and monitor progress and performance in executing approved projects. Our report that is being released today contains 2 recommendations to OMB and 14 to selected federal agencies. Among other things, we recommend that the Director of OMB commit to a firm date by which its draft guidance on legacy systems will be issued, and subsequently direct agencies to identify legacy systems and/or investments needing to be modernized or replaced and that the selected agency heads direct their respective agency CIOs to identify and plan to modernize or replace legacy systems as needed and consistent with OMB’s draft guidance. If agencies implement our recommendations, they will be positioned to better manage legacy systems and investments. 
In commenting on a draft of the report, eight agencies (USDA, Commerce, HHS, DHS, State, Transportation, VA, and SSA) and OMB agreed with our recommendations. Defense and Energy partially agreed with their respective recommendations. Defense stated that it planned to continue to identify, prioritize, and manage legacy systems, based on existing department policies and processes, and consistent to the extent practicable with OMB’s draft guidance. Energy stated that while the department continues to take steps to modernize its legacy investments and systems, it could not agree fully with our recommendation because OMB’s guidance is in draft and the department has not had an opportunity to review it. Defense’s and Energy’s comments are consistent with the intent of our recommendations; upon finalization of OMB’s guidance, we encourage both agencies to implement it. In addition, Justice and Treasury stated that they had no comment on their recommendations. In summary, O&M spending has steadily increased over the past 7 years and, as a result, key agencies are devoting a smaller amount of IT spending to DME activities. Further, legacy federal IT investments are becoming obsolete, and several aging investments are using unsupported components, many of which did not have specific plans for modernization or replacement. To its credit, OMB has developed a draft initiative that calls for agencies to analyze and review O&M investments. However, it has not finalized its policy. Until it does so, the federal government runs the risk of continuing to maintain investments that have outlived their effectiveness and are consuming resources that outweigh their benefits. Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this completes my prepared statement.
I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at [email protected]. Other key contributors include Gary Mountjoy (assistant director), Kevin Walsh (assistant director), Jessica Waselkow (analyst in charge), Scott Borre, Rebecca Eyler, and Tina Torabi. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The President's fiscal year 2017 budget request for IT was more than $89 billion, with much of this amount reportedly for operating and maintaining existing (legacy) IT systems. Given the magnitude of these investments, it is important that agencies effectively manage their IT O&M investments. GAO was asked to summarize its report being released today that (1) assesses federal agencies' IT O&M spending, (2) evaluates the oversight of at-risk legacy investments, and (3) assesses the age and obsolescence of federal IT. In preparing the report on which this testimony is based, GAO reviewed 26 agencies' IT O&M spending plans for fiscal years 2010 through 2017 and OMB data. GAO further reviewed the 12 agencies that reported the highest planned IT spending for fiscal year 2015 to provide specifics on agency spending and individual investments. The federal government spent more than 75 percent of the total amount budgeted for information technology (IT) for fiscal year 2015 on operations and maintenance (O&M) investments. Specifically, 5,233 of the government's approximately 7,000 IT investments are spending all of their funds on O&M activities. 
Such spending has increased over the past 7 fiscal years, which has resulted in a $7.3 billion decline from fiscal years 2010 to 2017 in development, modernization, and enhancement activities. Many IT O&M investments in GAO's review were identified as moderate to high risk by agency CIOs and agencies did not consistently perform required analysis of these at-risk investments. Until agencies fully review their at-risk investments, the government's oversight of such investments will be limited and its spending could be wasteful. Federal legacy IT investments are becoming increasingly obsolete: many use outdated software languages and hardware parts that are unsupported. Agencies reported using several systems that have components that are, in some cases, at least 50 years old. For example, the Department of Defense uses 8-inch floppy disks in a legacy system that coordinates the operational functions of the nation's nuclear forces. In addition, the Department of the Treasury uses assembly language code—a computer language initially used in the 1950s and typically tied to the hardware for which it was developed. OMB recently began an initiative to modernize, retire, and replace the federal government's legacy IT systems. As part of this, OMB drafted guidance requiring agencies to identify, prioritize, and plan to modernize legacy systems. However, until this policy is finalized and fully executed, the government runs the risk of maintaining systems that have outlived their effectiveness. The following table provides examples of legacy systems across the federal government that agencies report are 30 years or older and use obsolete software or hardware, and identifies those that do not have specific plans with time frames to modernize or replace these investments. In the report being released today, GAO is making multiple recommendations, one of which is for OMB to finalize draft guidance to identify and prioritize legacy IT needing to be modernized or replaced. 
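The scale described above can be sanity-checked with simple arithmetic. The sketch below is a back-of-the-envelope check using only the counts quoted in this testimony; note that the "more than 75 percent" figure refers to dollars spent, so the count-based share it computes is a related but distinct measure.

```python
# Back-of-the-envelope share implied by the investment counts in the text.
# Note: the "more than 75 percent" figure refers to dollars, not counts,
# so this count-based share is a related but distinct measure.
total_investments = 7000      # "approximately 7,000 IT investments"
all_om = 5233                 # investments spending all of their funds on O&M
print(f"{all_om / total_investments:.1%} of investments are O&M-only")
```

The count-based share works out to roughly three-quarters, consistent with the dollar-based figure reported above.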
In the report, GAO is also recommending that selected agencies address obsolete legacy IT O&M investments. Nine agencies agreed with GAO's recommendations, two partially agreed, and two stated they had no comment. The two agencies that partially agreed, the Departments of Defense and Energy, outlined plans that were consistent with the intent of GAO's recommendations. |
Conducting research is one of VA’s core missions. VA researchers have been involved in a variety of important advances in medical research, including development of the cardiac pacemaker, kidney transplant technology, prosthetic devices, and drug treatments for high blood pressure and schizophrenia. For fiscal year 2000, Congress appropriated $321 million for VA’s research programs, which support a wide range of human, animal, and basic science studies. VA uses a competitive funding process in which its Office of Research and Development (ORD) allocates about $296 million of these funds to VA researchers, with awards based on scientific merit and potential contribution to knowledge of issues of particular concern to VA. VA allocates most of the remainder to indirect costs of research, which includes support for the human subjects protection system. Besides the appropriation for research, VA allocates funds from its medical care appropriation to support the research infrastructure at medical centers such as laboratory facilities and investigator salaries. In fiscal year 2000, this allocation amounted to $343 million. VA researchers receive additional grants and contracts from other federal agencies such as the National Institutes of Health (NIH), research foundations, and private industry sponsors, including pharmaceutical companies. In fiscal year 1999, these additional funds amounted to approximately $481 million. Nonprofit research foundations linked to VA medical centers control some of these non-VA research funds. In fiscal year 2000, biomedical or behavioral research involving human subjects is being conducted at about 70 percent of VA medical centers. VA is responsible for ensuring that all human research it conducts or supports meets the requirements of VA regulations, regardless of whether that research is funded by VA, the subjects are veterans, or the studies are conducted on VA grounds.
Responsibility for administration and oversight of the research program has rested primarily with ORD. Recently, VA created the Office of Research Compliance and Assurance (ORCA), which has been charged with advising the Under Secretary for Health on all matters affecting the integrity of research protections for humans and animals, promoting the ethical conduct of research, and investigating allegations of research improprieties. Some VA research is also subject to oversight by two HHS components. The Food and Drug Administration (FDA) is responsible for protecting the rights of human subjects enrolled in research with products it regulates—drugs, medical devices, biologics, foods, and cosmetics. Research that involves human subjects and is funded by HHS is subject to oversight by its Office for Human Research Protections (OHRP). HHS requires institutions conducting human research with HHS funds to file a document with OHRP that indicates a commitment to comply with federal regulations. This document, called an assurance, may cover a single study (a single project assurance), or it may allow the institution to conduct multiple studies (a multiple project assurance). When an institution files a multiple project assurance with OHRP, all federally funded research involving human subjects at that institution must comply with HHS regulations. Both FDA and OHRP have the authority to monitor those studies conducted under their jurisdiction, and each can take action against investigators, IRBs, or institutions that fail to comply with applicable regulations. Research with human subjects conducted at VA facilities is governed by regulations designed to protect their rights and welfare.
These regulations establish minimum standards for the conduct and review of research to ensure that research involving human subjects is conducted in accordance with the three ethical principles outlined by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. First, the principle of respect for persons requires acknowledgement of individual autonomy, and conversely, the need to protect those with diminished autonomy. In practice, this principle requires that subjects give informed consent to participate in research; that is, they must be given sufficient information about a study, including its purpose and procedures, to decide whether to participate. They must also understand this information, and their consent must be voluntary. Second, the principle of beneficence requires that the expected benefits of research to the individual or to society outweigh its anticipated risks. Third, the principle of justice requires fair subject selection procedures, so that both the benefits and the burdens of research are distributed across a number of individuals in a just manner. In 1981, in response to the National Commission, both HHS and FDA promulgated revised regulations for the protection of human subjects. Seventeen federal departments and agencies, including HHS and VA, have adopted the core of HHS regulations. FDA’s regulations are slightly different from those adopted by HHS and VA. To safeguard the rights of subjects and promote ethical research, these federal regulations create a system in which the responsibility for the protection of human subjects is assigned to three groups. Investigators are responsible for conducting their research in accordance with applicable federal regulations and for ensuring that legally effective consent is obtained from each subject or his or her legally authorized representative.
Institutions are responsible for establishing oversight mechanisms for research, including establishing local committees known as institutional review boards (IRB), which are responsible for reviewing research proposals before studies are initiated and after they are under way to help ensure that research is conducted in accordance with the three principles described above. Agencies, including VA, are responsible for ensuring that their IRBs comply with applicable federal regulations and that they have sufficient space and staff to accomplish their obligations. VA requires each of its medical centers that engages in research with human subjects to establish its own IRB or secure the services of an IRB at an affiliated university. As of August 2000, approximately 40 percent of the medical centers conducting research with human subjects relied on an IRB at an affiliated university. The IRB sends its recommendations to the VA medical center’s research and development committee, which is responsible for maintaining standards of scientific quality, laboratory safety, and the safety of human and animal subjects. The research and development committee is charged with reviewing each study’s budget; assessing the availability of needed space, personnel, equipment, and supplies; and determining the effect of the planned research on the investigator’s other responsibilities, including the provision of clinical services. The committee can disapprove a study; however, VA regulations prevent the research and development committee (or any other institutional official or body) from overturning an IRB decision to disapprove a study. A VA investigator who wants to conduct research with human subjects must develop a research plan (called a protocol), supporting documents, and a consent form.
The consent form is designed to provide potential subjects with sufficient information about the study, including its procedures, risks, and benefits, to allow the subject to make an informed decision about whether to participate in the study (see fig. 1). The investigator then submits these materials for review. The study is not to be initiated until both the IRB and the research and development committee have approved it, and these committees may insist on changes to the protocol or consent form. Once approval has been given, VA regulations prohibit any unapproved changes to the study’s procedures, unless doing so is absolutely necessary to ensure the safety of a subject. If an investigator wants to alter some aspect of the study, then the IRB must review and approve an amendment or modification to the protocol. In a process known as continuing review, each study is to be re-reviewed at least once per year, and more frequently if the degree of risk warrants it. We found variation across medical centers and their affiliated universities in the implementation of VA regulations and policies involving protections for human subjects. At the eight sites we visited, we found noncompliance with VA regulations in four areas: (1) informed consent; (2) IRB review; (3) IRB membership, staff, and space; and (4) IRB documentation. The problems we identified are similar to problems that OHRP noted in letters to universities and hospitals it has found to be out of compliance with federal regulations. As shown in fig. 2, some sites we visited had more problems than did others. Of the sites we visited, those with the most extensive violations of VA regulations relied on VA-run IRBs. We identified fewer problems at the IRBs in our sample that were run by universities. In particular, we observed fewer problems with IRB membership, staff, space, and documentation at university-run IRBs than at VA-run IRBs. 
University-run IRBs were also more likely to conduct thorough and timely continuing reviews than VA-run IRBs. University-run IRBs we visited were not without problems, however. We found that some IRB-approved consent forms at each site omitted required information and some investigators used nonapproved consent forms. We found problems with the content or use of informed consent forms at all of the medical centers we visited. We found that some informed consent documents that had been approved for use by IRBs provided incomplete or unclear information. In addition, we found some studies in which the investigators used nonapproved consent forms when enrolling subjects. We also found one instance in which research was conducted without consent. Informed consent is a primary ethical requirement of research with human subjects and reflects the principle of respect for persons. The ability of competent subjects to make informed decisions about whether to participate in research and the ability of legally authorized representatives to protect those who are unable to provide consent because they are incapacitated are undermined when IRBs fail to ensure that all required information is included in consent forms or when investigators fail to obtain consent using approved procedures. We found that 60 percent of the 138 IRB-approved consent forms that we randomly sampled from lists of active projects provided incomplete or unclear information about required elements of informed consent. (Fig. 3 lists the elements of informed consent required by VA regulations.) Each IRB we visited approved some consent forms that contained incomplete information. 
For example, IRB-approved consent forms did not: indicate that blood would be drawn in a study on the effects of exposure; mention possible risks of a biopsy in a study designed to test a treatment; describe alternative treatment options in a study comparing two drug treatments for schizophrenia; and indicate who would have access to data obtained during a study on treatment for cirrhosis of the liver. Of the 84 IRB-approved consent forms we identified that omitted required elements or provided incomplete information, almost half did so for two or more required elements. For example, the consent form for a study of treatments to reduce the recurrence of melanoma did not provide clear information about the duration of the study, nor did it state whom to contact for information about research subjects’ rights. Participants were also told that data would continue to be obtained from their medical records even if they withdrew from the study. Thus, the consent document for this study provided incomplete information about two required elements and appeared to negate the subject’s right to withdraw from the study at any time. Moreover, this consent form might have created undue influence because it inappropriately suggested that the subjects’ own physician endorsed the potential benefits to the subject of participating in this study. Because the participants in this study are randomly assigned to receive either an unproven treatment or no treatment, the physician would have no way of knowing whether participation would benefit the subject. VA regulations allow an IRB to approve a consent procedure that alters or omits one or more of the required elements of consent if it finds and documents certain conditions. We were unable to find such documentation in the cases we reviewed. Moreover, 37 of the IRB-approved consent forms that omitted or provided incomplete information about a required element were for studies that involved investigational drugs or devices.
Thus, both VA and FDA regulations had to be met, and when informed consent is required, FDA regulations do not permit IRBs to alter or omit any required elements of informed consent. The information that was omitted most frequently, in about 15 percent of forms, was the person to whom subjects should direct questions about their rights as research subjects. This information, which is required by regulations, is not included in the standard template for informed consent that VA policy requires investigators to use. Sites varied in the number of IRB-approved forms that provided incomplete information and the number of incomplete or absent elements in approved forms. The percent of approved consent forms with incomplete information ranged from 78 to 100 percent of our sample at the four sites with the greatest number of these problems. Moreover, forms from these four sites often provided incomplete descriptions of two or more required elements of informed consent. As many as four elements of informed consent were missing or incomplete in IRB-approved forms at these sites. At the two sites where we found the fewest problems, about three-fourths of our sample of approved consent forms were problem-free, and multiple problems in the same form were rare. In addition to information required by VA regulations, VA policy also requires that informed consent forms indicate that VA will provide free medical treatment for research-related injuries. We found that about 30 percent of the IRB-approved consent documents we reviewed did not include this statement. The absence of this statement varied by site. (These data are not included in fig. 2, which presents noncompliance with VA regulations.) The majority of forms we sampled at two university-run IRBs did not include this information, and one VA-run IRB included it only about half the time. In contrast, the forms at the other university-run IRBs and at the four other VA-run IRBs almost always included it.
The requirement for informed consent was waived for eight of the projects we reviewed, and in each case, our review indicated that the study qualified for the waiver. According to VA regulations, certain categories of research (for example, studies of existing data that cannot be linked, directly or indirectly, to specific individuals) do not require informed consent or IRB approval. VA regulations also allow for a waiver of informed consent in some research that is not eligible for an exemption from IRB review, provided that the IRB determines that certain conditions apply. Although all the consent forms we obtained from investigators indicated that consent to participate in research had been obtained, we found that investigators did not always obtain consent appropriately. In this review of consent forms, we found 18 studies in which the investigators used nonapproved consent forms when enrolling subjects. We also separately identified one instance in which research was conducted without consent. We asked investigators at each site to show us signed consent forms from a randomly selected sample of their subjects. We examined 540 such consent forms, all of which had the signature of a subject or a surrogate. In addition to determining that investigators were able to produce these signed consent forms, at four sites we also compared these signed forms with consent forms that IRBs had approved for use in these studies. We found that investigators had used nonapproved consent forms with one or more subjects in 18 of the 73 studies we examined. A total of 33 of 292 subjects had signed nonapproved consent forms. The extent of this problem varied by site. We found that one or more subjects had signed a nonapproved form in 12 to 33 percent of the studies we examined at these four sites. Some of the nonapproved forms that were signed by subjects omitted key information that had been included in the IRB-approved version of the consent form.
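The consent-form counts above imply overall noncompliance rates. The minimal sketch below makes them explicit, using only the totals given in the text; the 12 to 33 percent range cited is per site and cannot be derived from these totals.

```python
# Overall rates implied by the totals quoted in the text; the 12 to 33
# percent range cited is per-site and cannot be derived from these totals.
studies_examined = 73          # studies compared against IRB-approved forms
studies_with_nonapproved = 18  # studies where a nonapproved form was used
subjects_sampled = 292
subjects_nonapproved = 33      # subjects who signed a nonapproved form

print(f"studies affected: {studies_with_nonapproved / studies_examined:.0%}")
print(f"subjects affected: {subjects_nonapproved / subjects_sampled:.0%}")
```

Roughly a quarter of the studies examined, but a smaller share of individual subjects, involved a nonapproved form.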
For example, the nonapproved form that had been signed by all four subjects enrolled in a study on treatments for lymphoma did not mention that the study would involve multiple bone marrow biopsies, the possible risks of those biopsies, or possible side effects of two drugs used, information that was included in the IRB-approved consent form. We identified one instance in which research procedures were performed without consent in the projects in our sample. In this instance, a patient who had not given consent was subjected to an esophageal biopsy for research purposes. This biopsy, which was not reported to the IRB, occurred in conjunction with a biopsy performed for diagnostic purposes in November 1997. We also found that investigators or their staff had not fully complied with requirements for obtaining consent in three other studies in our sample. In each of these, subjects had consented, and steps were implemented to address the problem. In October 1998, an investigator learned that a subject with schizophrenia did not understand his right to withdraw from research at any time. Upon discovering this, the investigator fired the person who had obtained the subject’s consent, withdrew the subject from the study, and reported the incident to the IRB. In May 1997, FDA discovered that the consent form signed by subjects in a study of an investigational device to facilitate walking among paraplegics had not included all the necessary information about their participation. The problem was reported to the IRB, the consent form was rewritten, and three previously enrolled subjects were given a revised form and a chance to withdraw from participation. In July 1997, an investigator realized that he did not have IRB approval for the protocol and consent form that had been used for 73 subjects, including schizophrenics, their family members, and health care providers, who had completed a questionnaire to assess decision-making.
The investigator reported the situation to the IRB, which required that subjects be given a revised approved consent form. We found one other problem in subject enrollment procedures used by an investigator, although in this case VA regulations were not involved. One subject who was incapacitated as a result of dementia was enrolled in a noninvasive study of abdominal aneurysms. Although the subject’s surrogate had provided consent, VA policy establishes protections for incapacitated subjects by prohibiting their enrollment in research that can be conducted with competent subjects. We encountered eight other cases in which surrogates enrolled incapacitated subjects in research, but we were unable to determine whether these cases were in accordance with VA’s policy. We found that five of the sites we visited did not implement certain required procedures for IRB review of research. For example, we found that studies at two sites were not reviewed by all necessary IRB members and four IRBs did not ensure timely or thorough continuing review of ongoing research. We found that two IRBs did not comply with VA regulations that research must be approved during properly convened meetings, either because meetings were held without a quorum or because the IRB chair improperly approved a high-risk study outside an IRB meeting. With the exception of certain categories of research involving minimal risk to subjects, VA regulations require IRBs to review research at convened meetings attended by a quorum, defined as a majority of members that includes at least one member whose primary concerns are in nonscientific areas. These regulations establish criteria for IRB meeting quorums to ensure that decisions about the protection of human participants in research reflect the consideration of diverse perspectives on research, including the views of scientists and nonscientists with a range of experience and expertise.
These protections are undermined when initial review is not conducted in accordance with these requirements. Four of seven meetings held by one VA-run IRB between January 1998 and August 1999 were held without a quorum. As a result, 17 studies were initiated without legitimate IRB approval, including studies on new drug treatments for unstable coronary symptoms and pneumonia. We examined four to six sets of minutes from IRB meetings held at the other seven IRBs we visited and found that a quorum was present at each. We found one other instance in which requirements for approval of research at convened IRB meetings were violated. A university-run IRB considered a high-risk drug study for cardiac patients and determined that re-review would be necessary after the investigator addressed several concerns. IRB minutes stated that because the drug company sponsoring the research would have rejected their site if a time deadline were not met, the IRB chair approved the study before the IRB reconsidered it. Although there are circumstances under which an IRB chair can approve a study, in all such cases the research must have been found to pose only minimal risk to subjects. In this instance, the IRB had determined that the study posed a high degree of risk. On the other hand, our sample also included 16 other studies that met criteria for approval outside a convened IRB meeting. VA regulations allow such a procedure (called expedited review) for studies that pose only minimal risk to subjects and that fall into one of several categories of research. Under expedited review procedures, the IRB chair, or one or more experienced IRB members designated by the chair, are authorized to approve research. For example, IRB approval was expedited for a study on the effects of a weight loss program in which subjects would attend informational sessions about diet and weight loss and have their weight and health monitored using routine, minimal-risk procedures. 
We found that the IRBs we visited differed in the sufficiency of the written information they asked investigators to provide about human subject protections prior to review. VA regulations identify eight criteria that IRBs must assess before approving research (see fig. 4). Although VA regulations do not specify the information IRBs must review to assess these criteria upon initial review, much of the information can only be provided by investigators. Because offsite study sponsors often prepare the consent forms and protocols used in multisite studies, IRBs must have sufficient information to assess whether the local investigator can properly implement human subject protections. We found that information in IRB files did not always address all the criteria that must be satisfied for an IRB to approve a study. Of the sites we visited, only one university-run IRB routinely requested detailed information from local investigators about each criterion in its application forms. For example, two IRBs did not routinely ask local investigators any questions about risks or about plans for monitoring the safety of subjects. Similarly, IRBs differed in the information they had from investigators about special protections for subjects who are likely to be vulnerable to coercion or undue influence. VA regulations require that IRBs ensure that additional safeguards are in place to protect the rights and welfare of such subjects; however, the regulations do not specify the nature of such safeguards. We analyzed project files for 27 studies designed to address issues involving psychiatric conditions that can be associated with a diminished capacity for decision-making: psychoses, mood disorders, and organic mental disorders such as dementia. We found that the investigator had included information about additional safeguards in applications for IRB approval in only about half of these studies.
For example, we reviewed from two to six files for projects involving potentially vulnerable subjects at each site and found references to additional protections in most of the relevant project files we sampled at four sites. In contrast, at two other locations no such documentation was evident in any of the IRB files we reviewed for projects involving subjects with psychiatric disorders that could affect decision-making. Some sites have implemented procedures that afford special protections for some such subjects. Examples follow. Subjects at one medical center who are recruited for psychiatric research and whose mental illness can affect decision-making are typically tested for their comprehension of central consent issues before enrollment in a study. At another medical center, seriously mentally ill subjects who participate in studies involving a risk that their symptoms might worsen are monitored by a physician who is independent of the research and who is assigned responsibility for deciding whether the subject should remain in the study or be withdrawn. Alzheimer’s researchers at a third site have established research registries for potential subjects, who were still able to give consent, and their caregivers. By enrolling, subjects agree to allow medical information to be entered into a data bank and to be contacted about future studies. By agreeing to be contacted, however, potential subjects have not consented to participate in future studies. Because these potential subjects are recruited for future studies through registries, the risk of undue influence that occurs when physicians recruit their patients is minimized. 
Moreover, rules for these registries limit the number of researchers who may contact each person, ensure that potential subjects are recruited only for studies for which they are in fact eligible, and allow registry managers to conduct follow-up surveys to ensure that members of the registries are satisfied with the way researchers treat them. We found that three VA-run IRBs did not meet VA’s regulatory requirement that each study must be re-reviewed at intervals not to exceed 1 year. Regular re-review of a project and associated reports of problems allows an IRB to assess the ratio of risks to benefits on the basis of data obtained since the study began and to ensure that subjects are appropriately informed of those risks and benefits. We examined the dates of continuing review for 73 projects at 6 sites that had received initial approval more than 1 year before our visit. Of these projects, 54 (74 percent) had been reviewed on time within the past year. The median delay for the 19 projects that were not re-reviewed on time was about 1 month. At one VA-run site, only one of the nine projects we reviewed that were more than 1 year old had been re-reviewed on time. At another VA-run site, about half of the necessary continuing reviews from our sample were conducted within 1 year, but delays of up to 14 months occurred in the other half. The three university-run IRBs we visited achieved high rates of timely continuing review. Four VA-run IRBs we visited reviewed insufficient information when conducting continuing review. 
OHRP has stated that compliance with regulatory requirements for continuing review entails, at a minimum, IRB review of the study protocol and any amendments; the current consent form; the number of subjects who have been enrolled; and information relevant to risks, including adverse events, unanticipated problems involving risks to the subject or others, withdrawal of subjects from the study, complaints about the study, and a summary of any recent information relevant to risk assessment. Only half of the IRBs we visited required the investigator to submit the most recent version of the consent document or asked about subjects who have withdrawn (or been withdrawn) from the study. All eight IRBs required reports of the number of subjects who had participated and adverse events. IRB staff told us that reports of adverse events are difficult for IRBs to handle. Regulations require investigators to report to the IRB unanticipated problems involving risks to subjects, and IRBs must review adverse events reported by all sites where the study is being conducted. The concerns we heard on our site visits were similar to those described in several recent reports on difficulties that IRBs nationwide face when handling large numbers of adverse event reports in the absence of key information necessary for their interpretation. For example, reports of adverse events from drug studies do not indicate whether the subject who experienced the adverse event had received an experimental drug or a different treatment, such as a placebo. Regulatory bodies such as FDA and OHRP and research sponsors such as the National Cancer Institute have recently argued that adverse event reports from studies involving many subjects are often best handled by special committees called data and safety monitoring boards. 
These boards are typically established by research sponsors and include statisticians and other scientists who analyze data collected during the course of a clinical trial to detect risks to subjects. A few of the IRBs we visited were attempting to develop systems to track adverse events. Even when a data and safety monitoring board has been established to analyze adverse event reports associated with a study, it is not required to report its findings to IRBs. In VA these boards, referred to as data monitoring boards, analyze only those adverse events reported in multicenter studies funded by VA through a program called Cooperative Studies. If results indicate that a study protocol or consent form must be modified, reports are released by the coordinating center for that cooperative study. It sends such reports to investigators and to the associate chiefs of staff for research and development at participating medical centers, with instructions to share the information with IRBs. Reports are not submitted to IRBs directly. Similarly, VA’s policy manual does not require that reports from data and safety monitoring boards associated with non-VA-funded research be submitted to its IRBs or medical centers. VA’s policy manual also does not require investigators or IRBs to ascertain whether a data and safety monitoring board has been established for studies in which its investigators participate. IRBs at the eight facilities we visited met certain membership requirements, but two did not ensure that their members had no potential conflicts of interest. We also found problems involving the number of IRB staff or IRB space at five facilities. VA regulations require that IRBs have sufficient administrative staff and space to review research and preserve the confidentiality of files. 
VA regulations for IRB membership include requirements that IRBs have at least five members and must include a scientist, a nonscientist, and at least one person who is not otherwise affiliated with the institution. (Individual members may fulfill more than one criterion.) We checked IRB membership rosters from the eight facilities we visited and found that all met these requirements. In addition, VA regulations state that if the IRB regularly reviews research involving a vulnerable category of subjects, then consideration should be given to including at least one member who has experience working with that group. Each of the eight IRBs we visited included someone from the institution’s psychiatry, psychology, or other mental health department, allowing access to specialized expertise with regard to the potential vulnerabilities of mentally ill subjects. We also found that each of the university-run IRBs we visited had members who were on staff at the affiliated VA medical center. Inclusion of VA staff helps fulfill VA’s regulatory requirement that IRBs have knowledge of the local research institution, including the scope of research activities, types of subjects likely to be involved, and the size and complexity of the institution. Officials at the medical centers we visited that relied on the IRBs of university affiliates reported that the larger academic community of the university offered advantages for IRB membership, including a broader range of expertise and reduced potential for conflicts of interest because IRB members would be less likely to be research colleagues of investigators. In addition, because all VA investigators at these three medical centers also held faculty appointments at the university, investigators did not need to apply for IRB approval from both the university and VA. 
Officials at some of the medical centers that operated their own IRBs reported that the advantages of doing so included maintaining greater control over the research review process and the increased likelihood that the IRB would know particular investigators and veteran subjects. We found that two VA-run IRBs did not ensure that their members had no potential conflicts of interest. VA regulations state that no IRB may have a member participate in an IRB initial or continuing review of any project in which that member has a conflict of interest. Although we found that investigators who were IRB members appropriately abstained or recused themselves from voting on their projects, two IRBs had, as a voting member, the associate chief of staff for research and development for their medical centers. The duties of a VA medical center’s associate chief of staff for research and development include helping local investigators obtain intramural or extramural research funds. As noted by OHRP, such institutional officials thus have a potential conflict of interest in conducting IRB reviews. These two officials told us, however, that they believed their objectivity as IRB members was not compromised by their other responsibilities. Officials at four of the VA-run IRBs told us that they did not have adequate staff to support IRB operations, as required by VA regulations. IRB administrative staff provide crucial services such as reviewing applications for completeness, corresponding with investigators, and maintaining IRB records. In addition, some administrative staff serve on IRBs as experts on regulatory issues. The VA-run IRBs we visited typically had one or two IRB staff members who often had other responsibilities. 
For example, at one of these sites, where a single staff person worked part-time for an IRB that reviews 200 to 300 projects annually, the IRB chair reported that IRB activities, such as suggesting revisions to consent forms, were curtailed due to insufficient staff support. In May 2000, VA headquarters distributed preliminary estimates for the number of administrative IRB staff that a medical center should have. This guidance noted that staffing levels would vary with the breadth and complexity of the research program. ORD officials acknowledged that these benchmarks are a first approximation in an effort to identify appropriate staffing levels. In addition to staff, IRBs must have secure, private areas for the review and discussion of confidential materials. IRBs also need office space for the IRB chair and administrative staff, secure file storage, and computer support. We found that IRB administrative staff at three sites (two of them VA-run) lacked sufficient space to conduct their work or store all IRB documents. For example, we observed IRB file folders stacked loosely on top of file cabinets and on floors at one of these sites. Six of the eight IRBs we visited did not maintain all the records required by VA regulations. Inadequate documentation does not, in itself, place subjects at risk. However, records of actions, deliberations, and procedures can help identify problems and corrective actions. Thus, documentary failures hinder appropriate monitoring and oversight activities. We found inadequate documentation in IRB files for about 9 percent of the ongoing projects we reviewed. For example, some files failed to include copies of all correspondence regarding IRB actions between the IRB and investigators, or copies of all approved consent forms. VA regulations require IRBs to retain these documents for at least 3 years after a study is terminated. Required documents were missing from one or more IRB files at five of the eight sites we visited. 
VA regulations require each facility to maintain written procedures that it will follow for conducting initial and continuing review, reporting IRB findings and actions to investigators and appropriate officials, and determining when special steps are necessary to monitor ongoing projects. Our review indicated wide differences between facilities in the adequacy of these documents. One VA-run facility has written procedures regarding criteria for exemption from IRB review and for use of expedited review procedures that are not in accordance with VA regulations. In addition, one medical center had been cited by the FDA for failure to have adequate written procedures in June 1999. The center agreed to have them in place by August 1999 but did not do so until December 1999. The written procedures available from three other VA-run IRBs did not include required descriptions of procedures for conducting project review, determining when additional monitoring of projects is necessary, or responding to investigator noncompliance. In contrast, the written procedures of the three university-run IRBs included all required procedures. We found one instance in which failure to have required written policies resulted in a further violation of VA regulations. Specifically, the previously discussed esophageal biopsy, which was conducted without consent, was not reported to the IRB or OHRP as required. VA regulations require institutions to ensure that “serious or continuing noncompliance” by investigators is reported to the IRB. A similar report must be filed with OHRP if the institution has an HHS-approved assurance, as did the medical center involved. The Associate Chief of Staff for Research and Development told us that he did not report the event to the IRB or OHRP because he followed the procedures for handling scientific misconduct outlined in VA’s policy manual. 
Nothing in the IRB’s project files for that investigator indicated a finding or report of noncompliance, imposition of any special restrictions or conditions for future research, or suspension or termination of research. We found that some IRB minutes did not comply with VA regulations, which require the minutes to include a record of actions, the basis for requiring changes in or disapproving research, and a written summary of discussions of controverted issues and their resolution. At each site, we reviewed from four to seven sets of minutes from IRB meetings held from December 1997 through October 1999. IRB actions were almost always clearly recorded in the minutes we examined at each site. Minutes from six facilities routinely included written summaries of discussions and reasons for actions. Two VA-run IRBs, however, rarely included substantive discussions of these matters in their minutes. Facilities also varied in their compliance with VA regulations about recording votes by IRB members during project review. The regulations state that minutes of IRB meetings must indicate the number of members voting for and against and the number of those abstaining. Two VA-run IRBs typically recorded votes as unanimous, and minutes from one other VA-run IRB recorded some votes as “approved,” without specifying vote totals. Without exact numbers, the presence of a majority of IRB members required during each vote cannot be confirmed. The voting records in minutes from the remaining IRBs we visited were generally in compliance with regulations. However, in one set of minutes from one site, we found that the total number of votes cast for each decision consistently exceeded the number of members listed in attendance. 
We identified three specific weaknesses in VA’s system for protecting human subjects: not ensuring that research staff have appropriate guidance, insufficient monitoring and oversight activity, and not ensuring that the necessary funds for human subject protections are provided. These weaknesses indicate that human subject protection issues have not historically received adequate attention from VA headquarters. VA headquarters has not provided the guidance necessary to ensure that its medical center staff are adequately informed about requirements for the protection of human research subjects. We found that VA did not develop a systemwide educational program, ensure that each of its facilities had an appropriate training program in place, or provide guidance about training to its facilities. We also found problems with the guidance VA provides about procedures for handling informed consent records. Efforts to protect the rights and welfare of human subjects are undermined when research staff have not been given clear, comprehensive guidance about human subject protections. VA headquarters officials told us that VA did not have a systemwide educational program devoted to human subject protection issues and that more training is needed. We found that three of the medical centers we visited had no educational program for IRB members, IRB staff, or investigators. From its October 1999 survey of VA field management, VA headquarters research officials learned that 12 of 22 Veterans Integrated Service Networks did not have an adequate plan for the ongoing education of IRB members, IRB administrative staff, or investigators about the regulatory requirements for protecting human subjects. In particular, medical centers with small research programs identified difficulties in establishing educational programs. Those facilities that had programs often reported that their university affiliates ran the training programs. 
A need for increased educational guidance from headquarters was one of the most commonly identified issues regarding human subject protections in the survey. OHRP and HHS’s Office of Inspector General have stressed that educational programs are critical to ensuring that IRBs comply with regulations and are able to assess the acceptability of research proposals in light of those regulations and to ensuring that investigators understand their responsibilities to protect human subjects. On the other hand, two VA-run IRBs and the three university-run IRBs we visited have implemented their own educational programs for both investigators and IRB members and staff, generally without guidance from headquarters. These programs included training new IRB members, devoting a portion of IRB meetings to discussion of issues involving the protection of human subjects, having some IRB members and staff attend national conferences about IRB operations, and instituting a certification program for investigators. Although we did not evaluate the adequacy of these programs, one of these sites, a university affiliate, developed an educational program that has been cited by HHS’s Office of Inspector General as a best practice for training in human subject protection issues. In addition to finding that VA did not have a systemwide educational program, we found problems with VA guidance for documenting consent to participate in research. VA’s policy manual includes two requirements that go beyond its regulations for the protection of human subjects: (1) the original signed consent form is to be placed in the subject’s medical record and (2) investigators are to use a standard template developed by VA to obtain consent. A VA official in ORD told us that the purpose of requiring the placement of signed research consent forms in medical records is to ensure that treating professionals are aware of relevant medical information. 
He acknowledged, however, that consent forms in medical records are not always readily accessible to treatment staff because they may be housed in old volumes of medical records maintained in storage areas. He also noted that medical records personnel at some VA medical centers have discarded consent forms rather than filing them. Our findings confirmed this. We were unable to locate consent forms in 20 percent of 187 medical records we reviewed at 7 of the 8 medical centers we visited. The remaining medical center we visited recently developed a system for scanning signed consent documents into its electronic medical records. However, these consent forms were not located in a part of the electronic record that would be routinely accessed by treating personnel. Some medical center research staff suggested that placing a synopsis of each study in a prominent place within subjects’ medical records would ensure that treating professionals know about relevant research participation, thus minimizing risks to subjects. We observed such a strategy at the Denver VA Medical Center, where a special flag in each subject’s electronic medical record links the reader to a brief summary of the study and to any investigational drugs involved. VA has not implemented a systemwide procedure for indicating research involvement in electronic medical records. Another area of concern is VA’s standard template for informed consent. This template includes space for investigators to enter study-specific information and exact language for requirements common to all consent forms. VA’s policy manual requires all VA investigators to use this form. We identified several problems with this template. The template does not reflect the regulatory requirement that a contact be provided for subjects to call with questions about their rights as research participants. For studies conducted at both VA and non-VA locations, use of the VA template created problems. 
In these cases, adherence to VA’s policy requires development and IRB approval of two consent forms—one based on VA’s template and one for the other location. Failure to use an appropriate IRB-approved consent form in these dual-form studies was the reason subjects signed nonapproved forms in 10 of the 33 cases previously discussed. VA has not provided clear guidance about the role of a witness to the consent process. Under VA regulations, a witness signature is needed only when the elements of informed consent have been presented orally. We found only 1 study in our sample of 146 in which consent was obtained orally. However, we found that 405 of the 540 signed consent forms we examined had been signed by a witness. OHRP guidance indicates that a witness to a subject’s consent to participate in research may be appropriate when aspects of the study create concerns about the enrollment process. In such cases, an independent witness can provide a valuable check on the consent process to certify, for example, that key information was properly conveyed and that subjects were not unduly coerced into participation. On the other hand, such a witness can represent an unnecessary intrusion into a potential subject’s privacy. VA’s consent template includes a line for the signature of a witness, without specifying who may serve as a witness, what the witness is attesting to, or the circumstances under which the witness is needed. Similarly, VA’s policy manual lacks guidance about who should serve as a witness or what that person’s role is. We found that VA did not have an effective system for monitoring protections of human subjects. Several instances follow. VA headquarters and affected medical centers were generally unaware of regulatory investigations and impending actions by OHRP or FDA against university-run IRBs until after the regulatory sanctions were applied. 
VA was unable to ensure that FDA could notify VA of planned inspections and provide copies of post-inspection correspondence because VA was unable to provide FDA with a list of its university-run IRBs until July 2000. VA did not have a complete list of those medical centers that used their own IRBs, relied on a university-run IRB, or were covered by an OHRP assurance until July 2000. Until OHRP’s regulatory action against the West Los Angeles VA Medical Center, VA was unaware that each of its facilities was required to provide a written assurance that it will comply with all federal regulations regarding the protection of human subjects. Written assurances facilitate proper oversight by ensuring documentation of core agreements between VA headquarters and IRBs. They also can provide evidence of knowledge of the regulations governing human subject protections and demonstrate an institution’s commitment to those protections. When VA subsequently obtained these assurances, it did not require medical centers to submit local written procedures for implementing human subject protections, as the regulations required. Review of written procedures can indicate gaps or errors in required local policies and procedures. VA headquarters has not provided medical centers with guidance in ensuring access to minutes or other key information when they arrange for the services of a university-run IRB. As a result, one medical center we visited did not have access to the minutes of its university-run IRB, and two medical centers affected by regulatory sanctions against their affiliated universities had not monitored IRB minutes to assess compliance with regulations. Furthermore, we found that VA headquarters and medical centers we visited did not effectively monitor investigators and their studies. Specifically, only one of the eight medical centers we visited checked whether investigators provided subjects with the correct IRB-approved consent form. 
That medical center recently began checking one signed consent form from each study as part of its continuing review. In addition, the files of one university-run IRB we visited did not correctly identify which researchers at the VA medical center were responsible for the studies the IRB had approved because the medical center required that department chairs rather than researchers be listed as principal investigators. Responsibility for funding human subject protections at medical centers is diffused across several decisionmakers, each of whom may also have competing priorities for the same funds. As a result, no one official is responsible for ensuring that medical center research programs have the resources necessary to support IRB operations and provide training in human subject protections. Although VA has not determined the funding amounts needed for human subject protection activities at the medical centers, research officials at five of the eight medical centers we visited told us that they had insufficient funds to ensure adequate operation of their human subject protection systems. We found that medical centers typically relied on several sources of funds to support the indirect costs of research, which include human subject protection activities. These sources included VA’s research appropriation, VA’s medical care appropriation, and non-VA research sponsors such as NIH or pharmaceutical companies. Different decisionmakers control the funds potentially available to a medical center from these sources. The medical center’s associate chief of staff for research and development controls the portion of the research appropriation targeted for the indirect costs of research. The medical center’s director controls the portion of the medical care appropriation allocated for indirect costs of research. 
Funds from non-VA research sponsors are generally held by a medical center’s nonprofit research foundation and are controlled by its board of directors, which has discretion over their use. As a result, responsibility for ensuring that human subject protections are adequately funded at each medical center is diffused across several decisionmakers. In addition, the decisionmakers at some of the medical centers we visited told us that they did not allocate additional funds for human subject protection activities because they had to consider those needs against the competing priorities of research support and medical care delivery. Headquarters research officials confirmed that these organizational tensions have created a situation in which there is no clear focus of responsibility for funding human subject protection activities at medical centers. One of the indirect costs of operating an IRB is the time spent by IRB chairs and members meeting their IRB responsibilities. Headquarters research officials told us that providing release time for IRB chairs and members has been a long-standing problem. VA staff at the medical centers we visited conduct their IRB activity as a collateral duty. We were told that the time commitment for members, and particularly for IRB chairs, is significant. Chairs and members spend time reviewing protocols before meetings, corresponding with investigators, attending IRB meetings, and preparing and reviewing documentation. We were told that the lack of release time made it difficult to recruit and retain IRB chairs and members. We found one instance in which a university paid VA to subsidize the costs of covering the emergency room duties of a VA physician who chaired an IRB that VA used. In another instance, a research official at one medical center told us that IRB meetings are held in the evening and that the nonprofit foundation pays IRB members. 
This arrangement allows members to fulfill their primary VA obligations during the day without the collateral responsibility of serving on the IRB. Research officials at five of the eight medical centers we visited reported that they had insufficient funds to ensure adequate operation of their human subject protection systems. Of particular concern, officials told us, was that lack of funds prevented hiring and training staff. Officials from some medical centers also told us that their nonprofit research foundations recognized that the level of VA funding for IRB operations was inadequate, and therefore contributed varying amounts of funds for specific local needs, such as training investigators in human subject protections or hiring IRB staff. For example, one nonprofit contributed $25,000 in fiscal year 2000 to support investigator training in human subject protections. Some VA nonprofit foundations and universities are charging private industry sponsors a fee for IRB review of their projects to help support IRB operations. However, headquarters research officials told us that VA has not determined the funding amounts needed for human subject protection activities at the medical centers. They said that such a determination is necessary for planning funding levels and ensuring that human subject protection activities are appropriately funded. Substantial corrective actions have been implemented at three medical centers in response to sanctions by regulatory agencies against their human research programs. These steps represent progress in meeting the requirements imposed by regulators and VA management, and each of the facilities, despite some difficulties, has resumed human research activities. VA has, however, been slow to identify systemwide deficiencies and to obtain information needed to step up oversight of human subject protection systems at its medical centers. 
Nonetheless, VA’s recent responses, such as establishment of the Office of Research Compliance and Assurance (ORCA) to monitor human subject protections at individual medical centers and across the system, are promising. The three medical centers and their affiliated universities we visited that had actions taken against them by regulators—West Los Angeles, Chicago Westside, and Denver—have made progress in implementing substantial changes to their human subject protection systems. Their written procedures appear to be in compliance with regulations, and their staffing levels seem reasonable for the workload. These medical centers and their affiliated universities, along with two others, had been affected by serious regulatory sanctions. Regulators found numerous problems at these institutions, including failure to obtain informed consent, failure to conduct adequate and timely continuing review of research, and failure to have adequate written IRB policies and procedures. OHRP deactivated West Los Angeles VA Medical Center’s multiple project assurance with HHS on March 22, 1999. On August 27, 1999, it restricted the assurance held by the University of Illinois at Chicago, which served as the IRB of record for the Chicago Westside VA Medical Center. On September 13, 1999, FDA suspended certain research projects at a consortium of six Colorado research institutions, including the Denver VA Medical Center. The University of Colorado, the location of the consortium’s IRB, suspended research with human subjects at all six sites in response to a letter from OHRP dated September 22, 1999, which raised concerns about IRB noncompliance with regulations. On December 17, 1999, OHRP restricted the multiple project assurance with Virginia Commonwealth University, which had been the IRB of record for the Richmond VA Medical Center. FDA had issued a warning letter to the university several months earlier about the IRB operations. 
On January 19, 2000, OHRP restricted the multiple project assurance with the University of Alabama at Birmingham, which was the IRB of record for the Birmingham VA Medical Center. There were three immediate responses in West Los Angeles, Chicago, and Denver to the sanctions imposed by regulatory agencies: a suspension of enrollment of new subjects in almost all research projects; an assessment of the appropriateness of the continued participation of previously enrolled subjects; and a determination by VA headquarters and affiliated universities of actions needed to improve human subject protection programs at each site. Each medical center or affiliated university that we visited then made extensive changes to its human subject protection system. These changes involved reconstituting IRBs; increasing the number of IRB administrative staff; training IRB members, staff, and investigators in the principles and procedures of human subject protection; creating or extensively revising IRB procedures; increasing working space for IRB operations; creating new databases for tracking protocols through the review process; re-reviewing projects; and resuming research activities. As of February 2000, all projects at the West Los Angeles VA Medical Center had been re-reviewed by an IRB. As of June 2000, all projects for the Chicago Westside VA Medical Center had been submitted to university-run IRBs for re-review, and as of July 2000, all projects had been re-reviewed for the Denver VA Medical Center. The Denver VA Medical Center’s IRB has been informed by OHRP and FDA that as of June 2000, its corrective actions are appropriate. On July 18, 2000, OHRP removed the restriction on the University of Illinois at Chicago stating that the university has developed and implemented an improved system for the protection of human subjects in research and has adequately completed all required actions. 
Responses varied across sites, however, because of differing responsibilities for IRB operations and site-specific problems that needed to be addressed. For example, at the West Los Angeles VA Medical Center, which operated its own IRB, VA headquarters and medical center officials made extensive changes in research personnel responsible for human subject protections. From April 1999 to the time of our visit in March 2000, about 50 employees had been rotated through the program with a few assigned full-time to support research and development and IRB operations. The university affiliated with the Chicago Westside VA Medical Center hired a nationally known expert in human subject protections to lead a comprehensive restructuring of its IRB operations. We identified two issues of concern at the West Los Angeles VA Medical Center. First, VA’s authorization of a resumption of IRB operations at West Los Angeles on April 19, 1999—less than 1 month after OHRP’s deactivation of its multiple project assurance—was premature. At that time, the medical center still lacked approved, written procedures for operation. Such procedures are required by regulations. It also was relying on untrained administrative staff to assist the newly formed IRBs. Furthermore, VA’s investigators had not been trained in human subject protection issues. Our second issue of concern is that officials at the West Los Angeles VA Medical Center were particularly slow to respond to OHRP’s requirements. In its 1999 letter deactivating the medical center’s multiple project assurance, OHRP noted the medical center’s continued lack of responsiveness to issues raised by OHRP over a 5-year period. For example, in 1994, OHRP required that the medical center establish a data and safety monitoring board to oversee studies involving subjects with severe psychiatric disorders. 
It took until February 2000 for medical center officials to approve standard operating procedures for the data and safety monitoring board and to hire its staff. In another instance, OHRP cited the medical center in 1995 for a lack of adequate written procedures for human subject protections. However, it took the medical center until February 2000 to develop and approve these procedures. Similarly, in 1995, OHRP strongly recommended that medical center officials develop an ongoing training program for investigators. Medical center officials told us they plan to begin such training in September 2000. At the Chicago Westside VA Medical Center, we found that, in permitting the continued participation of previously enrolled subjects in some projects, VA and the university-run IRBs did not ensure that continuing review requirements were met for these projects. When we raised this issue with officials during our February 2000 visit, they acknowledged this lack of oversight. They have since required investigators for these projects to submit materials for continuing review. We found that the Chicago Westside VA Medical Center did not play an active role in assisting its university-run IRBs to improve their human subject protection system. The medical center organizational chart for research and development did not show any linkage with the three university IRBs. The medical center had only one representative among the 18 members of the biomedical IRB and one on the 17-member combined biomedical-behavioral IRB. There were no VA representatives on the third IRB, which reviewed behavioral studies, because, as officials told us, VA conducted few such studies. At the time of our visit to the medical center—over 5 months after the OHRP action—the medical center had done little to improve its communication with the IRBs despite the recommendation to do so made by the VA headquarters site visit team in September 1999.
Although one local VA research official participated on a university committee charged with prioritizing studies for re-review and made suggestions to modify the IRB form used by investigators to submit protocols for review, the medical center had not established a mechanism for routine contact with and monitoring of the IRBs. In addition, the medical center was unaware of VA protocols being submitted for IRB review, IRB actions to approve or disapprove continuation of studies, and serious adverse events that could affect veterans who were subjects of research. At the time of our visit, the medical center was unable to provide us with reliable data on which investigators had been trained by the university in human subject protection regulations and issues. Furthermore, as of July 2000, the medical center had not responded to a May 2000 request from the university for comments on their new IRB procedures manual. In contrast, the Denver VA Medical Center established mechanisms to enhance communication between the research and development program and its three university-run IRBs by having regular meetings and increasing the number of VA personnel on the IRBs. As of June 2000, the chair of one of the university-run IRBs and the co-chair of another were VA employees. Five other VA employees served as members of the IRBs. Medical center personnel were working closely with their counterparts in the university to design a database that would allow VA research officials access to VA project information at the university-run IRBs. When the IRBs at their affiliated universities faced sanctions by regulatory agencies, officials at the Richmond and Birmingham medical centers chose to establish their own IRBs. They told us they did so to increase their control over the research review process. These officials told us they each created an IRB, developed written procedures, trained IRB members, and resumed their research programs after re-reviewing their projects. 
In addition, the Birmingham VA Medical Center has trained investigators and IRB staff, and the Richmond VA Medical Center has trained research staff. VA has been slow to recognize and address systemwide deficiencies in its human subject protection activities. Although OHRP identified problems with human subject protections at the West Los Angeles VA Medical Center in 1994, VA did not have a plan to address systemwide concerns involving research until July 1998. VA did not begin to implement systemwide changes until after OHRP took regulatory action against the medical center in March 1999. VA’s initial responses to regulators’ actions affecting the West Los Angeles VA Medical Center and other medical centers were crisis-driven and site-specific. Specifically, headquarters formed teams that conducted site visits to determine actions needed at the affected medical centers. Headquarters monitored corrective actions at the medical centers primarily through an exchange of reports and correspondence. In July 1998, VA developed a plan to reorganize its field research operations. This plan addressed a variety of research concerns including the involvement of human subjects and the ethical conduct of studies. Only recently, however, has VA headquarters begun to implement systemwide changes to improve its human subject protections. Its steps have included providing information to investigators and research staff, obtaining information about medical centers’ research programs, and making organizational changes to enhance monitoring and oversight of research involving human subjects. These steps have been slowly implemented, but they provide a promising foundation for improvements to protections for human subjects in VA research. VA headquarters officials have taken several steps to provide information to VA investigators and local research staff about human subject protections. The initial information provided by ORD described issues at affected medical centers.
It was not until October 1999 that ORD provided medical centers with specific actions that could be helpful in strengthening their human subject research programs. Starting with its May 1999 bimonthly conference call with associate chiefs of staff for research and development, ORD began discussing human subject protection issues in light of the March 1999 OHRP action against the West Los Angeles VA Medical Center. Also in May 1999, they began to plan a series of educational programs for investigators, IRB members, research administrators, and medical center directors focused on human subject protection issues. In October 1999, ORD held a nationwide videoconference in which OHRP and VA research officials discussed human subject protection issues and answered questions from VA staff. Also in October 1999, ORD began to list on its Web site human subject protection information available through OHRP and other organizations and distributed a summary of lessons learned from institutions that had been affected by recent sanctions by regulatory agencies. ORD officials told us they expect to complete a draft of a revised policy manual for VA research by September 2000. ORCA officials have also implemented initiatives. For example, it began bimonthly teleconference calls in February 2000 with IRB and research officials at medical centers to share information and obtain input on human protection issues. In March 2000, ORCA issued its first newsletter to local research officials. This educational newsletter, planned as a twice a month series, will address informed consent and human subject protection issues. In April 2000, ORCA convened a group of VA research staff and outside experts in human subject protections to identify training courses developed elsewhere that VA could use. The group also plans to develop guidance and strategies for VA to use to train IRB staff, members, and investigators. 
Beginning in May 2000, ORCA sent the first of three notices to local research programs alerting them to current human subject protection concerns. In June 2000, it began issuing a monthly set of news clippings on human subject protection issues. In 1999, VA’s National Center for Ethics sponsored a conference on ethics in research and issued related reports, including a discussion of the principles guiding the ethical conduct of research involving participants with impaired capacity to consent. VA is participating in national efforts to develop policies and procedures for protecting these participants. VA headquarters officials have acknowledged that they lacked key information about research programs at medical centers. To obtain more accurate and complete information, they have taken several steps. Examples follow. In October 1998, VA research officials began to develop a new computerized data system to improve the comprehensiveness and accuracy of data about studies involving human subjects at VA medical centers. As of June 2000, development was still under way. In April 1999, VA asked its medical centers whether they operated their own IRB or relied on the IRB of an affiliated university. VA also asked whether assurances with OHRP were involved. ORCA finished verifying this information in July 2000. In October 1999, ORD sent a questionnaire to the director of each Veterans Integrated Service Network to assess the adequacy of staffing and support for human subject protections at the medical centers in each network. A lack of adequate resources was one of the three most common problems identified. Sixteen of the 22 networks reported inadequate IRB support, including staff, space, and equipment. Fourteen networks identified education as a priority issue and cited the need for educational opportunities and guidance documents.
In May 2000, headquarters sent information to the networks on educational opportunities and made suggestions for the level of administrative staffing of IRBs. By February 2000, VA had accepted an assurance from each medical center conducting human research that it would comply with regulations for the protection of human subjects. In April 2000, VA’s Chief Financial Officer reported that VA would implement a system to allow for the explicit accounting of funds from the medical care appropriation that are used by medical centers to support the indirect costs of research. These steps are necessary to obtain key information about human subject research programs at medical centers. This information will allow headquarters officials to determine the additional steps that may be needed locally or systemwide to ensure compliance with regulations and the protection of human subjects. VA is implementing two organizational changes to enhance its monitoring and oversight of human research programs. The Under Secretary for Health announced these changes in April 1999, but as of August 2000, they had not been fully implemented. They are designed to allow routine onsite monitoring of research programs, thereby helping medical centers identify weaknesses and develop strategies to improve compliance with regulations and the protection of human subjects. Although promising in concept, it is too soon to determine whether the initiatives described below will fulfill their objectives. In April 1999, VA announced the creation of ORCA. VA did not begin staffing this office until it appointed the chief officer in December 1999. VA plans that ORCA will have eight headquarters staff by September 30, 2000, and four regional offices with four staff each by December 31, 2000. As of July 2000, VA had not completed its staffing of the headquarters component and had not filled any regional office positions. 
Although ORCA’s specific plans for monitoring medical center research activities were still under development in summer 2000, officials told us that they planned to conduct a site visit on a rotating basis to each medical center conducting human research. As of July 2000, ORCA officials told us they had not developed a specific schedule for conducting these visits, but they expect to do so when the regional offices are staffed. ORCA’s headquarters has a budget of $600,000 for fiscal year 2000 and $1.5 million for fiscal year 2001. The regional offices have a budget of $1.9 million for fiscal year 2000 and $2.3 million for fiscal year 2001. In August 2000, VA awarded a $5.8 million, 5-year contract for external accreditation of its IRBs. This contract requires the contractor to conduct a site visit every 3 years to each medical center conducting human research. The contractor is expected to review IRB performance and to assess its compliance with regulations. VA officials told us that VA expects that the university-run IRBs it uses will grant access to the accreditation team. VA is the first research organization to have an external accreditation of its human research programs. VA has not ensured that its medical centers have fully implemented required procedures for the protection of human subjects. Primary responsibility for implementation of these protections lies with local institutions: medical centers and their IRBs. Although we cannot generalize from our sample to the universe of VA research institutions, we found sufficient evidence of noncompliance with applicable federal regulations to be concerned. We also found that incomplete access to information about adverse events experienced by research participants made it difficult for IRBs to fulfill their mandate. We found widespread weaknesses in the management of human subject protections that VA had not identified because of its low level of monitoring.
VA’s past failure to ensure that its research facilities had the resources, including staff, training, and guidance, needed to accomplish their obligations suggests that headquarters has not given attention or sufficient priority to the protection of human subjects. Despite a 5-year record of problems at the West Los Angeles VA Medical Center, VA did not begin to implement systemwide improvements until OHRP took regulatory action against the medical center. VA’s initial actions were primarily crisis-driven and site-specific. Generally, appropriate corrective actions have now been implemented at each of the three medical centers we visited that were affected by regulatory sanctions. However, VA’s progress on systemwide improvements to its human subject protection system has been slow. VA only recently began to obtain the information it needs—such as identifying which medical centers use their own IRBs and which rely on university-run IRBs—to plan and implement improvements. Some facilities we visited and projects we reviewed appeared to have reasonably strong protections for the rights and welfare of participants. VA’s recent efforts to improve its human subject protections systemwide and its commitment to developing an effective oversight and monitoring system are important steps toward ensuring that all VA facilities meet requirements, but it is too soon to determine how well these initiatives will fulfill their objectives. VA has a long history of important contributions to medical research, and it could set important precedents in improving human research protections. For example, VA is the first federal agency to take action to externally accredit its IRBs. Whether VA medical centers establish their own IRBs or work with university-run IRBs, VA needs to ensure that the IRBs have adequate resources, and VA must exercise its oversight authority if it is to know what guidance, preventive efforts, or corrective actions are needed.
To strengthen VA’s protections for human subjects, we recommend that the Acting Secretary of Veterans Affairs direct the Under Secretary for Health to take immediate steps to ensure that VA medical centers, their IRBs— whether operated by VA or not—and VA investigators comply with all applicable regulations for the protection of human subjects by providing research staff with current, comprehensive, and clear guidance regarding protections for the rights and welfare of human research subjects; providing periodic training to investigators, IRB members, and IRB staff about research ethics and standards for protecting human subjects; developing a mechanism for handling adverse event reports to ensure that IRBs have the information they need to safeguard the rights and welfare of human research participants; expediting development of information needed to monitor local protection systems, investigators, and studies and to ensure that oversight activities are implemented; and determining the funding levels needed to support human subject protection activities at medical centers and ensuring an appropriate allocation of funds to support these activities. In written comments (see app. II) on a draft of this report, VA agreed with our findings and recommendations. VA said that initiatives it has already planned and implemented will provide a foundation for a national prototype in effective human subject protections. Although VA agreed that its implementation of a systematic approach to human subject protections has been slow to develop, it provided clarification regarding statements in the draft report that VA had not focused attention on systemwide weaknesses until after the March 1999 regulatory action at the West Los Angeles VA Medical Center. VA stated that planning for the establishment of regional offices for risk management and research compliance had begun almost 1 year earlier. We have modified the report accordingly. 
In concurring with our recommendations to provide research staff with current, comprehensive, and clear guidance and training about human subject protections, VA identified initiatives planned or under way to improve its guidance, disseminate the guidance, and train research staff in its use. These initiatives represent promising efforts. Whether VA’s plans for guidance and training are effective will depend upon implementation details. VA must ensure that its research staff have access to and receive current guidance and training to enable them to meet their obligations to protect the rights and welfare of human research subjects. VA agreed with our recommendation to improve adverse event reporting and said it has expanded the distribution of reports from its data monitoring boards to include all appropriate IRBs. VA has also indicated its intention to participate in governmentwide efforts to address this matter. These are important first steps in ensuring that IRBs have the information they need to safeguard the rights and welfare of human subjects. However, because the VA monitoring boards analyze only those adverse events reported in VA’s multicenter Cooperative Studies program, further efforts to address reports of adverse events from other studies are necessary. VA also concurred with our recommendation to improve monitoring and oversight of human subject protection activities and identified several activities it has planned or implemented, such as external accreditation of IRBs and establishment of performance measures related to human subject protections for medical center research officials. Oversight and monitoring are essential if VA is to know whether the procedures at its medical centers and affiliated universities comply with human subject protection regulations. Whether the actions VA plans to take in this area will be sufficient depends on how effectively they are implemented. 
Finally, VA concurred with our recommendation to determine the funding levels needed to support human subject protection activities at medical centers and then ensure an appropriate allocation of funds to support these activities. VA’s response notes that it has begun to account for medical center expenditures associated with research support—an important first step toward determining necessary funding levels. However, VA did not discuss how it would ensure that funds are appropriately allocated to human subject protection activities. As we noted, organizational tensions within VA have created a situation in which there is no clear focus of responsibility for funding such activities at medical centers. Until this is addressed, we are concerned that VA cannot ensure that human subject protections will be appropriately funded. VA officials also provided technical comments, which we incorporated where appropriate. We are sending this report to the Honorable Hershel W. Gober, Acting Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7101 if you or your staff have any questions. An additional GAO contact and the names of other staff who made major contributions to this report are listed in app. III. Our objectives were to (1) assess the Department of Veterans Affairs’ (VA) implementation of human subject protections, (2) identify whether weaknesses exist in VA’s system for protecting human subjects, and (3) assess VA’s actions to improve human subject protections at those sites affected by sanctions imposed by regulatory agencies and throughout VA’s health care system. 
To achieve these objectives, we reviewed VA, Food and Drug Administration (FDA), and Department of Health and Human Services (HHS) regulations and VA policies for the protection of human subjects; interviewed VA research officials; visited selected VA medical centers to assess local implementation of these standards; and visited VA medical centers affected by research restrictions. We also interviewed officials from the Office for Human Research Protections (OHRP) and reviewed HHS guidance. We reviewed records of congressional hearings; reports on human subject protections, including those issued by the HHS Office of Inspector General, the Institute of Medicine, and the National Bioethics Advisory Commission; and the literature on the history of human subject protections. To assess VA’s implementation of human subject protections, we conducted site visits at eight VA medical centers: Atlanta, Ga.; Baltimore, Md.; Cleveland, Ohio; Dallas, Tex.; Louisville, Ky.; Providence, R.I.; Seattle, Wash.; and Washington, D.C. We selected sites to reflect major differences in VA research programs (see table 1). First, we selected medical centers that differed in the number of studies they conduct with human subjects. Second, we selected medical centers that differed in the institutions responsible for operating the institutional review board (IRB), the committee tasked with reviewing each study to assess its protections for human subjects. Third, we selected facilities that differed in the assurance arrangements they had with OHRP. Some institutions had filed a legally binding commitment to comply with federal regulations, called a multiple project assurance, with OHRP; other institutions had not. Our results from these eight medical centers cannot be generalized to other sites. At each site, we interviewed local research personnel, including the associate chief of staff for research and development, the IRB chair, and staff responsible for providing administrative support to the IRB.
We attended an IRB meeting at six sites (Atlanta, Cleveland, Dallas, Providence, Seattle, and Washington, D.C.). We also reviewed written procedures describing how the IRB and institution implement human subject protections and a sample of four to seven sets of IRB minutes from the last 2 years (December 1997 through October 1999) at each site. We randomly selected a sample of 15 to 22 projects at each site for detailed analysis. To ensure that our selection included research on potentially vulnerable participants, we oversampled studies designed to provide information about psychiatric conditions that can affect decision-making capacity, such as dementia, schizophrenia, and depression. Up to one-fourth of the studies we sampled at any one site were in this category. We examined IRB records for each project in our sample (146 in all, including 27 psychiatric studies). For the subset of 138 studies that required written consent, we reviewed the most recently approved consent form. To determine whether subjects had signed appropriate consent forms indicating willingness to participate in research and whether those forms were available as required, we examined about 5 signed consent forms maintained in investigators’ files from each of 125 studies. We also tried to obtain about two signed consent forms from each project in paper medical records. This sample included 98 projects. Some medical records could not be made readily available to us. For example, some medical records were at a different location during our visit. To assess corrective actions at VA medical centers in response to restrictions on their human research programs, we conducted 2-day visits to three facilities where human research was suspended: Chicago Westside, Ill.; Denver, Colo.; and West Los Angeles, Calif. Our site visit team included an expert in human subject protections under contract to us.
For each of these sites, we examined the OHRP and FDA reports associated with the restriction of human research, action plans for resolving identified problems, documents regarding current human subject operations, and the status of the research program and human subject protections at the time of our visits (February 2000 and March 2000). We discussed these matters with medical center officials and officials from IRBs at affiliated universities when they were involved. In addition, we reviewed documents and interviewed officials from two other medical centers: Birmingham, Ala., and Richmond, Va. These facilities were also affected when the IRBs of their affiliated universities were cited for noncompliance with federal regulations. Both have now established their own IRBs. We conducted our work between June 1999 and August 2000 in accordance with generally accepted government auditing standards. Cheryl Brand, Kristen Joan Anderson, Jacquelyn Clinton, Patricia Jones, and Janice Raynor also made key contributions to this report. In addition, Barry Bedrick and Julian Klazkin provided advice on legal issues, and Deborah Edwards provided advice on methodological issues. | Pursuant to a congressional request, GAO reviewed the rights and welfare of veterans who volunteer to participate in research at the Department of Veterans Affairs (VA) and the effectiveness of its human subject protection system, focusing on: (1) VA's implementation of human subject protections; (2) whether weaknesses exist in VA's system for protecting human subjects; and (3) VA's actions to improve human subject protections at those sites affected by sanctions applied by regulatory agencies and throughout VA's health care system.
GAO noted that: (1) VA has adopted a system of protections for human research subjects, but GAO found substantial problems with its implementation of these protections; (2) medical centers GAO visited did not comply with all regulations to protect the rights and welfare of research participants; (3) among problems GAO observed were failures to provide adequate information to subjects before they participated in research, inadequate reviews of proposed and ongoing research, insufficient staff and space for review boards, and incomplete documentation of review board activities; (4) GAO found relatively few problems at some sites that had stronger systems to protect human subjects, but GAO observed multiple problems at other sites; (5) although the results of GAO's visits to medical centers cannot be projected to VA as a whole, the extent of the problems GAO found strongly indicates that human subject protections at VA need to be strengthened; (6) three specific weaknesses have compromised VA's ability to protect human subjects in research; (7) VA headquarters has not provided medical center research staff with adequate guidance about human subject protections and thus has not ensured that research staff have all the information they need to protect the rights and welfare of human subjects; (8) insufficient monitoring and oversight of local human subject protections have permitted noncompliance with regulations to go undetected and uncorrected; (9) VA has not ensured that funds needed for human subject protections are allocated for that purpose at the medical centers, with officials at some medical centers reporting that they did not have sufficient resources to accomplish their mandated responsibilities; (10) to VA's credit, substantial corrective actions have been implemented at three medical centers in response to sanctions by regulatory agencies taken against their human research programs, but VA's systemwide efforts at improving protections have been slow to 
develop; (11) medical centers affected by sanctions have taken numerous steps to improve human subject protections; and (12) VA has, however, been slow to take action to identify any systemwide deficiencies and obtain necessary information about the human subject protection systems at its medical centers. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Established in 1943, Hanford produced plutonium for the world’s first nuclear device. At the time, little attention was given to the resulting by-products—massive amounts of radioactive and chemically hazardous waste—or how these by-products were to be permanently disposed of. About 46 different radioactive elements represent the majority of the radioactivity currently residing in Hanford’s tanks. Once Hanford tank waste is separated by the WTP waste treatment process, the high-level waste stream will contain more than 95 percent of the radioactivity but constitute less than 10 percent of the volume to be treated. The low-activity waste stream will contain less than 5 percent of the radioactivity but constitute over 90 percent of the volume. The tanks also contain large volumes of hazardous chemical waste, including various metal hydroxides, oxides, and carbonates. These hazardous chemicals are dangerous to human health and can cause medical disorders including cancer, and they can remain dangerous for thousands of years. Over the years, the waste contained in these tanks has settled; today it exists in the following four main forms or layers: Vapor: Gases produced from chemical reactions and radioactive decay occupy tank space above the waste. Liquid: Fluids (supernatant liquid) may float above a layer of settled solids or under a floating layer of crust; fluids may also seep into pore spaces or cavities of settled solids, crust, or sludge. Saltcake: Water-soluble compounds, such as sodium salts, can crystallize or solidify out of wastes to form a salt-like or crusty material. Sludge: Denser water-insoluble or solid components generally settle to the bottom of a tank to form a thick layer with a consistency similar to peanut butter. DOE’s cleanup, treatment, and disposal of radioactive and hazardous wastes are governed by a number of federal and state laws and implemented under the leadership of DOE’s Assistant Secretary for Environmental Management. 
Key laws include the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, and the Resource Conservation and Recovery Act of 1976, as amended. In addition, most of the cleanup activities at Hanford are carried out under the Hanford Federal Facility Agreement and Consent Order among DOE, the Washington State Department of Ecology, and EPA. Commonly called the Tri-Party Agreement, this accord was signed in May 1989 and has been amended a number of times since then to, among other things, establish additional enforceable milestones for certain WTP construction and tank waste retrieval activities. The agreement lays out a series of legally enforceable milestones for completing major activities in Hanford’s waste treatment and cleanup process. A variety of local and regional stakeholders, including county and local government agencies, citizen and advisory groups, and Native American tribes, also have long-standing interests in Hanford cleanup issues. These stakeholders make their views known through various public involvement processes, including site-specific advisory boards. DOE’s Office of River Protection (ORP) administers Hanford’s radioactive liquid tank waste stabilization and disposition project, including the construction of the WTP. The office has an annual budget of about $1 billion and a staff of 151 federal employees, of which 54 support the WTP project. Other cleanup projects at Hanford are administered by DOE’s Richland Operations Office. DOE has attempted and abandoned several different strategies to treat and dispose of Hanford’s tank wastes. In 1989, DOE’s initial strategy called for treating only part of the waste. Part of this effort involved renovating a World War II-era facility in which it planned to start waste treatment. DOE spent about $23 million on this project but discontinued it because of technical and environmental issues and stakeholder concerns that not all the waste would be treated. 
In 1991, DOE decided to treat waste from all 177 tanks. Under this strategy, DOE would have completed the treatment facility before other aspects of the waste treatment program were fully developed; however, the planned treatment facility would not have had sufficient capacity to treat all the waste in a time frame acceptable to EPA and the Washington State Department of Ecology. DOE spent about $418 million on this strategy. Beginning in 1995, DOE attempted to privatize tank waste cleanup. Under its privatization strategy, DOE planned to set a fixed price and pay the contractor for canisters and containers of stabilized tank waste that complied with contract specifications. If costs grew as a result of contractor performance problems, the contractor, not DOE, was to bear these cost increases. Any cost growth occurring as a result of changes directed by DOE was to result in an adjustment to the contract price and was to be borne by DOE. Under the privatization strategy, DOE’s contractor would build a demonstration facility to treat 10 percent of the waste volume and 25 percent of the radioactivity by 2018 and complete cleanup in 2028. However, because of dramatically escalating costs and concerns about contractor performance, DOE terminated the contract after spending about $300 million, mostly on plant design. Following our criticisms of DOE’s earlier privatization approach, DOE decided that a cost-reimbursement contract with incentive fees would be more appropriate than a fixed-price contract using a privatization approach for the Hanford project and would better motivate the contractor to control costs. In total, since 1989 when cleanup of the Hanford site began, DOE has spent over $16 billion to manage the waste and explore possible ways to treat and dispose of it. DOE’s current strategy for dealing with tank waste consists of the construction of a large plant—the WTP—to treat and prepare the waste for permanent disposal. 
Begun in 2000, the WTP project is over half complete, covers 65 acres, and is described by DOE as the world’s largest radioactive waste treatment plant. As designed, the WTP project is to consist of three waste processing facilities, an analytical laboratory, and over 20 smaller supporting facilities to treat the waste and prepare it for permanent disposal. The three waste processing facilities are as follows (see fig. 2): Pretreatment Facility – This facility is to receive the waste from the tanks and separate it into high-level and low-activity components. This is the largest of the WTP facilities—expected to be 12 stories tall with a foundation the size of four football fields. High-Level Waste Facility – This facility is to receive the high-level waste from the pretreatment facility and immobilize it by mixing it with a glass-forming material, melting the mixture into glass, and pouring the vitrified waste into stainless-steel canisters to cool and harden. The canisters filled with high-level waste were initially intended to be permanently disposed of at a geological repository that was to be constructed at Yucca Mountain in Nevada. However, in 2010, DOE began taking steps to terminate the Yucca Mountain project and is now considering other final disposal options. In the meantime, high-level waste canisters will be stored at Hanford. Low-Activity Waste Facility – This facility is to receive the low-activity waste from the pretreatment facility and vitrify it. The containers of vitrified waste will then be permanently disposed of at another facility at Hanford known as the Integrated Disposal Facility. Constructing the WTP is a massive, highly complex, and technically challenging project. For example, according to Bechtel documents, the completed project will contain almost 270,000 cubic yards of concrete and nearly a million linear feet of piping. 
The project also involves developing first-of-a-kind nuclear waste mixing technologies that will need to operate for decades with perfect reliability because, as currently designed, once WTP begins operating, it will not be possible to access parts of the plant to conduct maintenance and repair of these technologies due to high radiation levels. Since the start of the project, DOE and Bechtel have identified hundreds of technical challenges that vary in their significance and potential negative impact, and significant technical challenges remain. Technical challenges are to be expected on a one-of-a-kind project of this size, and DOE and Bechtel have resolved many of them. However, because such challenges remain, DOE cannot be certain whether the WTP can be completed on schedule and, once completed, whether it will successfully operate as intended. Among others, the significant technical challenges DOE and Bechtel are trying to resolve include the following: Waste mixing—One function of the WTP will be to keep the waste uniformly mixed in tanks so it can be transported through the plant and to prevent the buildup of flammable hydrogen and fissile material that could inadvertently result in a nuclear criticality accident. The WTP project has been developing a technology known as “pulse jet mixers” that uses compressed air to mix the waste. Such devices have previously been used successfully in other materials mixing applications but have never been used for mixing wastes with high solid content like those to be treated at the WTP. In 2004 and again in 2006, we reported that Bechtel’s inability to successfully demonstrate waste mixing technologies was already leading to cost and schedule delays. Our 2004 report recommended that DOE and Bechtel resolve this issue before continuing with construction. 
DOE agreed with our recommendation, slowed construction on the pretreatment and high-level waste facilities, and established a path forward that included larger-scale testing to address the mixing issue. In 2010, following further testing by Bechtel, DOE announced that mixing issues had been resolved and moved forward with construction. However, concerns about the pulse jet mixers’ ability to successfully ensure uniform mixing continued to be raised by the Safety Board, PNNL, and DOE engineering officials on site. As a result, in late 2011, DOE directed Bechtel to demonstrate that the mixers will work properly and meet the safety standards for the facility. According to DOE officials, no timeline for the completion of this testing has been set. Preventing erosion and corrosion of WTP components—Excessive erosion or corrosion of components such as mixing tanks and piping systems in the WTP is possible. Such excessive erosion and corrosion could be caused by potentially corrosive chemicals and large dense particles present in the waste that is to be treated. This excessive erosion and corrosion could result in the components’ failure and lead to disruptions of waste processing. Bechtel officials first raised concerns about erosion and corrosion of WTP components in 2001, and these concerns were echoed in 2006 by an independent expert review of the project. Following further testing, DOE project officials declared the issue closed in 2008. However, DOE and Bechtel engineers recently voiced concerns that erosion and corrosion of components is still a significant risk that has not been sufficiently addressed. Furthermore, in January 2012, the Safety Board reported that concerns about erosion in the facility had still not been addressed, and that further testing is required to resolve remaining uncertainties. 
Bechtel has agreed to do further work to resolve technical challenges surrounding erosion and corrosion of the facility’s internal components; however, DOE and Bechtel have not yet agreed upon an overall plan and schedule to resolve this challenge. Preventing buildup of flammable hydrogen gas—Waste treatment activities in the WTP’s pretreatment and high-level waste facilities can result in the generation of hydrogen gas in the plant’s tanks and piping systems. The buildup of flammable gas in excess of safety limits could cause significant safety and operational problems. DOE and Bechtel have been aware of this challenge since 2002, and Bechtel formed an independent review team consisting of engineers and other experts in April 2010 to track and resolve the challenge. This team identified 35 technical issues that must be addressed before the hydrogen buildup challenge can be resolved. Bechtel has been working to address these issues. However, a 2011 DOE construction project review noted that, while Bechtel continues to make progress resolving these issues, the estimated schedule to resolve this challenge has slipped. According to DOE and Bechtel officials, Bechtel is still conducting analysis and is planning to complete the work to resolve this challenge by 2013. Incomplete understanding of waste—DOE does not have comprehensive data on the specific physical, radiological, and chemical properties of the waste in each underground waste tank at Hanford. In the absence of such data, DOE has established some parameters for the waste that are meant to estimate the range of waste that may go through the WTP in an effort to help the contractor design a facility that will be able to treat whatever waste—or combination of wastes—is ultimately brought into the WTP. In 2006, an independent review team stated that properly understanding the waste would be a key factor in designing an effective facility. 
In 2010, the Consortium for Risk Evaluation with Stakeholder Participation, PNNL, and the Safety Board reviewed the status of DOE’s plans to obtain comprehensive data on the characteristics of the waste, and each concluded that DOE and Bechtel did not have enough information about the waste and would therefore need to increase the range of possible wastes that the WTP is designed to treat in order to account for the uncertainty. Officials at PNNL reported that not having a large enough range is “a vulnerability that could lead to inadequate mixing and line plugging.” The Safety Board reported that obtaining representative samples of the waste is necessary to demonstrate that the WTP can be operated safely, but that DOE and its contractors have not been able to explain how those samples will be obtained. A 2011 DOE headquarters construction project review report on the WTP project notes that progress has been made on including additional information and uncertainties in the efforts to estimate and model the waste that will be fed to the WTP. However, DOE officials stated that more sampling of the waste is needed. An expert study is under way that will analyze the gap between what is known and what needs to be known to design an effective facility. This study is expected to be completed in August 2014. The risks posed by these technical challenges are exacerbated because once the facility begins operating, certain areas within the WTP (particularly in the pretreatment and high-level waste facilities) will be permanently closed off to any human intervention in order to protect workers and the public from radioactive contamination. 
To shield plant workers from intense radiation that will occur during WTP operations, some processing tanks will be located in sealed compartments called “black cells.” These black cells are enclosed rooms where inspection, maintenance, repair, or replacement of equipment or components is extremely difficult because high radiation levels prevent access into them. As a result, plant equipment in black cells must last for WTP’s 40-year expected design life without maintenance. According to a recent review conducted by the DOE Inspector General, premature failure of these components could result in radiation exposure to workers, contaminate large portions of the WTP and/or interrupt waste processing for an unknown period. Significant failures of components installed in the WTP once operations begin could render the WTP unusable and unrepairable, wasting the billions of dollars invested in the WTP. In August 2012, DOE announced that it was asking a team of experts to examine the WTP’s capability to detect problems in the black cells and the plant’s ability to repair equipment in the black cells, if necessary. According to DOE officials, the team will, if needed, recommend design changes to improve the operational reliability of the black cells and the WTP. In addition, the Secretary of Energy has been actively engaged in the development of a new approach to managing WTP technical challenges and has assembled subject matter experts to assist in addressing the technical challenges confronting the WTP. The estimated cost to construct the WTP has almost tripled since the project’s inception in 2000, its scheduled completion date has slipped by nearly a decade, and additional significant cost increases and schedule delays are likely to occur because DOE has not fully resolved the technical challenges faced by the project. 
In addition, DOE recently reported that Bechtel’s actions to take advantage of potential cost savings opportunities are frequently delayed and, as a result, rising costs are outpacing opportunities for savings. DOE’s original contract price for constructing the WTP, approved in 2000, stated that the project would cost $4.3 billion and be completed in 2011. In 2006, however, DOE revised the cost baseline to $12.3 billion, nearly triple the initial estimate, with a completion date of 2019. As we reported in 2006, contractor performance problems, weak DOE management, and technical challenges resulted in these cost increases and schedule delays. A 2011 DOE headquarters review report on the WTP projected additional cost increases of $800 million to $900 million over the revised 2006 cost estimate of $12.3 billion and additional delays to the project schedule. Furthermore, in November 2011, the Department of Justice notified the state of Washington that there is a serious risk that DOE may be unable to meet the legally enforceable milestones, required by legal agreement, for completing certain WTP construction and startup activities at Hanford, as well as tank waste retrieval activities. The Department of Justice did not identify the cause of the delay or specify the milestones that could be affected. As of May 2012, according to our analysis, the project’s total estimated cost had increased to $13.4 billion, and additional cost increases and schedule delays are likely, although a new performance baseline has not yet been developed and approved. DOE ORP officials warn that cost increases and schedule delays will occur as a result of funding shortfalls and will prevent the department from successfully resolving the technical challenges the WTP project faces. However, from fiscal years 2007 to 2010, the project was appropriated the $690 million that DOE requested in its annual congressional budget requests. 
In fiscal years 2011 and 2012, DOE received approximately $740 million each year—a $50 million increase over fiscal year 2010 funding. DOE project management officials and Bechtel representatives told us that $740 million for fiscal year 2012 was not enough to support planned work and, as a result, project work would slow down and project staffing levels would be reduced. However, according to senior DOE officials, including the acting Chief Financial Officer, the primary cause of the increasing costs and delayed completion has been the difficulty in resolving complex technical challenges rather than funding issues. DOE and Bechtel have not yet fully estimated the effect of resolving these technical challenges on the project’s baseline. In February 2012, DOE directed Bechtel to develop a new, proposed cost and schedule baseline for the project and, at the same time, to begin a study of alternatives that includes potential changes to the WTP’s design and operational plans to resolve technical challenges faced by the project. The study is also to identify the cost and schedule impact of these alternatives on the project. For example, according to a DOE official, one alternative Bechtel is studying is to construct an additional facility that would process the tank waste by removing the largest solid particles from the waste before it enters WTP’s pretreatment facility. This advance processing would reduce the risks posed by insufficient mixing of the waste in the pretreatment facility by the pulse jet mixers. A DOE official told us that this alternative could add $2 to $3 billion to the overall cost of the project and further delay its completion by several years. According to DOE officials, other alternatives being studied involve reducing the total amount of waste the WTP treats or operating the WTP at a slower pace for a longer period of time to accomplish its waste processing mission. 
However, these alternatives could increase the total time needed to process Hanford’s waste and add billions of dollars to the total cost to treat all of Hanford’s tank waste. Further delays in constructing the WTP could also result in significant cost increases to treat all of Hanford’s waste. For example, DOE has estimated that a 4-year delay in the WTP start-up date could add an additional $6 to $8 billion to the total cost of the Hanford Site tank waste treatment mission. In June 2012, DOE announced that the new cost and schedule baseline Bechtel is developing would not include the pretreatment and high-level waste facilities. According to DOE officials, additional testing and analysis is needed to resolve the facilities’ technical challenges before a comprehensive new cost and schedule baseline can be completed. DOE officials responsible for overseeing the WTP project are uncertain when the new baseline for these facilities will be completed. As a result, our May 2012 cost estimate of $13.4 billion is highly uncertain and could grow substantially if the technical challenges that the project faces are not easily and quickly resolved. DOE and Bechtel have identified some opportunities for cost savings, but these opportunities are not always pursued in a timely fashion. For example, Bechtel has identified an estimated $48 million in savings that could be achieved over the life of the project by accelerating specific areas of the project scope. Specifically, some of these savings could be achieved by acquiring material and equipment in bulk to maintain the pace of construction activities and avoid delays. In addition, another $24 million in savings could be achieved by reducing the amount of steel, pipe, wire, and other materials needed in remaining design work. DOE reported in March 2012, however, that Bechtel’s actions to take advantage of potential cost savings opportunities are frequently delayed and, as a result, rising costs have outpaced opportunities for savings. 
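The cost figures cited in this section can be cross-checked with simple arithmetic. A minimal sketch follows; all dollar amounts are taken from the report text, while the derived multiples and the savings-versus-delay comparison are my own calculations, not figures from the report:

```python
# Reported WTP cost estimates, in billions of USD (figures from the report).
original_2000 = 4.3    # original contract price, approved in 2000
baseline_2006 = 12.3   # revised cost baseline, 2006
estimate_2012 = 13.4   # estimated total cost as of May 2012

# The 2006 baseline is "nearly triple" the original estimate, and the
# May 2012 estimate has roughly tripled it.
print(f"2006 baseline: {baseline_2006 / original_2000:.1f}x the 2000 estimate")
print(f"2012 estimate: {estimate_2012 / original_2000:.1f}x the 2000 estimate")

# Identified savings opportunities ($48M + $24M, in millions) compared with
# the low end of the estimated cost of a 4-year startup delay ($6 billion,
# expressed here in millions for a like-for-like ratio).
savings_identified = 48 + 24
delay_cost_low = 6_000
print(f"Identified savings cover {savings_identified / delay_cost_low:.1%} "
      "of the low-end cost of a 4-year delay")
```

The ratio in the last line illustrates the report's point that identified savings are small relative to the cost of further schedule slippage.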
For example, DOE reported that Bechtel continues to perform poorly in meeting planned dates for material delivery due to delayed identification and resolution of internal issues impacting procurement of plant equipment. Specifically, DOE noted that, of 95 needed project equipment deliveries scheduled for July 2011 through October 2011, only 42 were delivered on time and that this poor performance trend is expected to continue. DOE is taking steps to improve its management and oversight of Bechtel’s activities, including levying penalties on the contractor for quality and safety problems, but it continues to face challenges to completing the WTP project within budget and on schedule. For example, DOE’s continued use of a fast-track, design-build management approach, where construction on the project has moved forward before design activities are complete, has resulted in costly reworking and schedule delays. DOE is taking steps to improve its management and oversight of Bechtel’s activities. For example, in November 2011, DOE’s Office of Enforcement and Oversight started an investigation into Bechtel’s potential noncompliance with DOE’s nuclear safety requirements. Specifically, this DOE office is investigating Bechtel’s processes for designing, procuring, and installing structures, systems, and components and their potential noncompliance with DOE nuclear safety requirements. If the contractor is found to not be complying with DOE requirements, DOE’s Office of Enforcement and Oversight is authorized to take appropriate action, including the issuance of notices of violations and proposed civil penalties against Bechtel. Since 2006, DOE’s Office of Enforcement and Oversight has conducted six investigations into Bechtel’s activities at the WTP that resulted in civil penalties against Bechtel. Five of the six investigations involved issues related to the design and safe operation of the WTP. 
For example, in 2008, DOE’s Office of Enforcement and Oversight investigated Bechtel for circumstances associated with procurement and design deficiencies of equipment for the WTP and identified multiple violations of DOE nuclear safety requirements. This investigation resulted in Bechtel receiving a $385,000 fine. In addition, in January 2012, DOE’s Office of Health, Safety, and Security reported that some aspects of the WTP design may not comply with DOE safety requirements. Specifically, under DOE safety regulations, Bechtel must complete a preliminary documented safety analysis—an analysis that demonstrates the extent to which a nuclear facility can be operated safely with respect to workers, the public, and the environment. However, Bechtel’s preliminary documented safety analyses have not always kept pace with the frequently changing designs and specifications for the various WTP facilities and DOE oversight reviews have highlighted significant deficiencies in the project’s safety analyses. In November 2011, according to DOE officials, DOE ordered Bechtel to suspend work on design, procurement, and installation activities for several major WTP systems including parts of the pretreatment facility and high-level waste facility until the contractor demonstrates that these activities are aligned with DOE nuclear safety requirements. This suspension remains in effect. DOE has also taken steps to address concerns about the project’s safety culture. According to DOE’s Integrated Safety Management System Guide, safety culture is an organization’s values and behaviors modeled by its leaders and internalized by its members, which serves to make safe performance of work the overriding priority to protect workers, the public, and the environment. In 2011, the Safety Board issued the results of an investigation into health and safety concerns at WTP. 
The investigation’s principal conclusion was that the prevailing safety culture of the WTP project effectively defeats DOE’s policy to establish and maintain a strong safety culture at its nuclear facilities. The Safety Board found that both DOE and Bechtel project management behaviors reinforce a subculture at WTP that deters the timely reporting, acknowledgement, and ultimate resolution of technical safety concerns. In addition, the Safety Board found that a flawed safety culture embedded in the project at the time had a substantial probability of jeopardizing the WTP mission. As a result of these findings, the Safety Board made a series of recommendations to DOE to address WTP project safety problems. DOE has developed implementation plans to address the Safety Board’s recommendations. In addition, DOE itself has raised significant concerns about WTP safety culture. In 2011, DOE’s Office of Health, Safety, and Security conducted an independent assessment of the nuclear safety culture and management of nuclear safety concerns at the WTP. As a result of this assessment, DOE determined that most DOE and Bechtel staff at the WTP believed that safety is a high priority. However, DOE also determined that a significant number of DOE and Bechtel staff expressed reluctance to raise concerns about the safety or quality of WTP facility designs because WTP project management does not create an atmosphere conducive to hearing concerns or for fear of retaliation. Employees’ willingness to raise safety concerns without fear of retaliation is an essential element of a healthy safety culture and of creating an atmosphere where problems can be identified. DOE’s assessment also determined that DOE has mechanisms in place to address safety culture concerns. 
For example, according to a DOE Office of Health, Safety, and Security report on the safety culture and safety management of the project issued in January 2012, the project has an employee concerns program and a differing professional opinion program that help staff raise safety concerns. In addition, the January 2012 report stated that several DOE reviews of the WTP project have been effective in identifying deficiencies in WTP designs and vulnerabilities that could impact the future operation of waste treatment facilities. DOE has taken some steps to improve its management and oversight of Bechtel’s activities, but some problems remain. For example, DOE’s ongoing use of a fast-track, design-build approach continues to result in cost and schedule problems. As we reported in 2006, DOE’s management of the project has been flawed, as evidenced by DOE’s decision to adopt a fast-track, design-build approach to design and construction activities, and its failure to exercise adequate and effective oversight of contractor activities, both of which contributed to cost and schedule delays. According to DOE officials, DOE’s current project management orders will not allow the use of the fast-track, design-build approach for first-of-its-kind complex facilities such as the WTP. However, DOE was able to start the project using the fast-track, design-build approach before this order was in place. In a February 2012 written statement, DOE defended the fast-track, design-build management approach for the WTP project by stating that: (1) it allows for a single contract that gives the contractor responsibility for designing, building, and commissioning the facility, thus helping ensure that the design works as expected; (2) it allows the contractor to begin construction on parts of the facility for which design was complete; and (3) doing so would encourage construction to be completed faster. 
According to DOE officials, construction of the WTP is currently more than 55 percent complete, though the design is only about 80 percent complete. Nuclear industry guidelines suggest that design should be complete to at least 90 percent before starting construction of nuclear facilities. Furthermore, according to current DOE orders, construction should not begin until engineering and design work on critical technologies is essentially complete, and these technologies have been tested and proven to work. According to DOE’s analysis in 2007, several years after the beginning of WTP construction, several critical technologies designed for the WTP had not yet reached this level of readiness. In addition, current DOE guidance states that the design-build approach can be used most successfully with projects that have well-defined requirements, are not complex, and have limited risks. DOE measures technology readiness using Technology Readiness Levels, which range from 1 to 9, where 9 represents a fully tested and proven technology. DOE guidance indicates that critical technologies should be at Technology Readiness Level 6 or higher before construction begins. However, in 2007, the last time DOE assessed Technology Readiness Levels for the entire project, DOE found that 14 out of 21 critical technologies assessed were at a Technology Readiness Level lower than 6. To keep pace with the construction schedule, Bechtel fabricated 38 vessels containing pulse jet mixers and installed 27 of them into the WTP pretreatment and high-level waste facilities. However, according to DOE officials, Bechtel has been forced to halt construction on the pretreatment facility and parts of the high-level waste facility because it was unable to verify that several vessels would work as designed and meet safety requirements. 
Bechtel is currently analyzing potential alternatives that include, among other things, scrapping 5 to 10 already completed vessels and replacing them with vessels with more easily verifiable designs, according to DOE officials. The cost and schedule impact of these alternatives has not yet been fully estimated. DOE has also experienced continuing problems overseeing its contractor’s activities. For example, DOE’s incentives and management controls are inadequate for ensuring effective management and oversight of the WTP project so that it is completed within budget and on schedule. As we reported in 2006, DOE did not ensure adherence to normal project reporting requirements and, as a result, status reports provided an overly optimistic assessment of progress on the project. We also questioned the adequacy of project incentives for ensuring effective project management. Specifically, because of cost increases and schedule delays, we noted that the incentive fees in the original contract—including more than $300 million in potential fees for meeting cost and schedule goals or construction milestones—were no longer meaningful. Since that time, some problems have continued. For example, Bechtel’s current contract, which was modified in 2009, allows the contractor to receive substantial incentives, such as an award fee for achieving specified project objectives, and DOE has paid this fee, although events subsequently revealed that the project was likely to exceed future cost and schedule estimates. Since 2009, DOE has paid Bechtel approximately $24.2 million, or 63 percent, of its $38.6 million incentive fee based, in part, on Bechtel’s adherence to cost and schedule targets and its resolution of technical challenges associated with waste mixing.
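As a quick arithmetic check on the fee share reported above (figures in millions of dollars, taken from the report):

```python
paid, total_fee = 24.2, 38.6        # $ millions, as reported
share = paid / total_fee
print(f"{share:.0%}")               # 63%
```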
However, the WTP project is now at serious risk of missing major future cost and schedule targets, and it was subsequently determined by DOE that the waste mixing technical challenges were not resolved after all. According to DOE officials, substantial further effort is needed that will take at least an additional 3 years of testing and analysis until project scientists and engineers can fully resolve this challenge. In the current contract, there is no contractual mechanism for recovering an incentive fee that was paid to a contractor for work that was subsequently determined to be insufficient, according to DOE officials. Furthermore, under its project management order, DOE is to incorporate and manage an appropriate level of risk—including critical technical, performance, schedule, and cost risks—to ensure the best value for the government. However, DOE has no assurance that the incentives included in the WTP construction contract are assisting in the effective management of these risks. The contract provides that “incentives are structured to ensure a strong financial motivation for the Contractor to achieve the Contract requirements.” However, the contract requirements have been, and continue to be, revised to provide for a longer schedule and higher cost. For example, DOE has already announced that the project will not be completed within the 2006 performance baseline and has directed the contractor to prepare a revised performance baseline. Further, since 2009, DOE has awarded $15.6 million in incentive fees to Bechtel for meeting periodic schedule and cost goals, even though the WTP’s schedule has slipped, and construction costs have continued to increase. Bechtel has estimated, as of May 2012, that costs to complete the project are currently more than $280 million over the amount specified in the construction contract. DOE’s Inspector General has also found that DOE may have awarded Bechtel fees without the contractor adequately fulfilling work. 
A 2012 DOE Office of Inspector General report notes that DOE may have overpaid $15 million of potentially $30 million in incentive fees for the delivery and installation of vessels into the WTP facility. When DOE learned that one of the vessels did not have quality assurance records and therefore did not conform to contract requirements, it instructed Bechtel to return $15 million of the performance fee. However, according to the DOE Office of Inspector General report, neither DOE nor Bechtel could provide evidence that the fee was returned to DOE. DOE’s oversight of Bechtel’s activities may also be hampered because project reviews, such as external independent reviews or independent project reviews—which are a key oversight mechanism—are only required by DOE’s project management order to occur at major decision points in a project. These reviews examine a project’s estimated cost, scope, and schedule and are intended to provide reasonable assurance that the project can be successfully executed on time and within budget. For example, these independent reviews are to occur when a cost and schedule baseline is completed for the project or when construction is authorized to begin. A 2006 review conducted by the U.S. Army Corps of Engineers, for example, identified serious problems with Bechtel’s progress on the WTP project and indicated that the project would significantly exceed both cost and schedule targets. In 2009, the Office of Project Management also conducted an external independent review. Such reviews are an important mechanism for overseeing DOE contractor activities. In a large, complex, multiyear project such as WTP, however, many years can pass between these critical decision points and the associated independent reviews. DOE officials noted that other reviews, such as Construction Project Reviews, were also completed between 2009 and 2011 for the WTP project. 
While officials stated that these reviews did examine the project’s cost and schedule, they noted that the reviews were not as extensive as the 2006 and 2009 reviews. DOE is responsible for one of the world’s largest environmental cleanup projects, in which it must stabilize large quantities of hazardous and radioactive waste and prepare it for disposal at a permanent national geologic repository that has yet to be identified. By just about any definition, DOE’s WTP project at Hanford has not been a well-planned, well-managed, or well-executed major capital construction project. Daunting technical challenges that will take significant effort and years to resolve, combined with a near tripling of project costs and a decade of schedule delays, raise troubling questions as to whether this project can be constructed and operated successfully. Additional cost increases amounting to billions of dollars and schedule delays of years are almost certain to occur. DOE and Bechtel officials have stated that the most recent cost increases and schedule delays are the result of, among other things, Congress not providing the required funding to resolve technical issues. In our view, however, the more credible explanation continues to be DOE’s decision to build what the department itself describes as the world’s largest and most complex nuclear waste treatment plant using a fast-track, design-build strategy that is more appropriate for much simpler, smaller scale construction projects. Where nuclear industry guidelines suggest completing 90 percent of design prior to beginning construction, DOE instead began construction when design of the facility was in the early stages and insisted on developing new technologies and completing design efforts while construction was ongoing. The result has been significant design rework, and some already procured and installed equipment may have to be removed, refabricated, and reinstalled.
The technical challenges are especially acute in the WTP’s pretreatment and high-level waste facilities. Technologies for these facilities require perfect reliability over the plant’s 40-year lifetime because no maintenance or repair will be possible once waste treatment begins. According to DOE’s analysis, several critical technologies designed for the WTP have not been tested and verified as effective. Additional expensive rework in the pretreatment and high-level waste facilities, particularly in the area of waste mixing, is likely to occur. Further, an additional facility to treat tank waste before the waste arrives at the WTP’s pretreatment facility may be required. This additional facility could add billions to the cost of treating Hanford’s waste. All the while, DOE and outside experts continue to raise safety concerns, and Bechtel continues to earn incentive fees for meeting specific project objectives even as the project’s costs and timelines balloon far beyond the initially planned goals. DOE’s recent actions to identify cost savings opportunities, to hold Bechtel accountable for the significant deficiencies in its preliminary documented safety analyses, and to require the contractor to comply with DOE’s nuclear safety regulations are steps in the right direction. However, we continue to have serious concerns not only about the ultimate cost and final completion date for this complex project, but also about whether this project can successfully accomplish its waste treatment mission given that several critical technologies have not been tested and verified.
To improve DOE’s management and oversight of the WTP project, we recommend that the Secretary of Energy take the following three actions: (1) not resume construction on the WTP’s pretreatment and high-level waste facilities until critical technologies are tested and verified as effective, the facilities’ design has been completed to the level established by nuclear industry guidelines, and Bechtel’s preliminary documented safety analyses comply with DOE nuclear safety regulations; (2) ensure the department’s contractor performance evaluation process does not prematurely reward contractors for resolving technical issues later found to be unresolved (for example, DOE could seek to modify its contracts to withhold payment of incentive fees until the technical challenges are independently verified as resolved); and (3) take appropriate steps to determine whether any incentive payments made to the contractor for meeting project milestones were made erroneously and, if so, take appropriate actions to recover those payments. We provided DOE with a draft of this report for its review and comment. DOE generally agreed with the report and its recommendations. In its written comments, DOE described actions under way to address the first recommendation, as well as additional steps it plans to take to address each of the report’s recommendations. DOE stated that it has recently taken action that is, in part, aligned with the first recommendation. Specifically, it issued guidance to the contractor, which directed the contractor to address remaining WTP technical and management issues sufficient to produce a high confidence design and baseline for the pretreatment and high-level waste facilities of the WTP.
DOE also established a limited construction activity list for the high-level waste facility, as well as a much more limited set of construction activities in the pretreatment facility, which DOE stated will allow it to complete construction of some portions of the facilities while taking into account the unresolved technical issues. DOE stated that it believes this approach balances the intent of the recommendation and the need to continue moving forward with the project and preparations to remove waste from Hanford waste storage tanks. While this approach appears reasonable, we would caution that DOE should sufficiently monitor the construction activities to ensure that additional construction beyond the activities specifically named on the approved list not be undertaken until the technical and management issues are satisfactorily resolved. DOE also noted that the Secretary of Energy has been actively engaged in the development of a new approach to managing the WTP and, together with a group of independent subject matter experts, is working to resolve long-standing technical issues. As requested by DOE, we did incorporate information into the report to indicate the Secretary’s personal involvement in addressing the WTP issues and the technical teams assembled to help resolve these persistent technical issues. In addition, DOE stated that the department and the contractor have implemented a plan to assure that the WTP documented safety analysis will meet the department’s nuclear safety requirements and DOE established a Safety Basis Review Team that will provide a mechanism for reviewing the documented safety analyses for each facility to ensure it meets nuclear safety requirements. DOE’s planned actions to address the recommendations in this report are discussed more fully in DOE’s letter, which is reproduced in appendix I. DOE also provided technical clarifications, which we incorporated into the report as appropriate. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Energy; the appropriate congressional committees; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the individual named above, Ryan T. Coles and Janet Frisch, Assistant Directors; Gene Aloise; Scott Fletcher; Mark Gaffigan; Richard Johnson; Jeff Larson; Mehrzad Nadji; Alison O’Neill; Kathy Pedalino; Tim Persons; Peter Ruedel; and Ron Schwenn made key contributions to this report. | In December 2000, DOE awarded Bechtel a contract to design and construct the WTP project at DOE's Hanford Site in Washington State. This project--one of the largest nuclear waste cleanup facilities in the world-- was originally scheduled for completion in 2011 at an estimated cost of $4.3 billion. Technical challenges and other issues, however, have contributed to cost increases and schedule delays. GAO was asked to examine (1) remaining technical challenges, if any, the WTP faces; (2) the cost and schedule estimates for the WTP; and (3) steps DOE is taking, if any, to improve the management and oversight of the WTP project. GAO reviewed DOE and contractor data and documents, external review reports, and spoke with officials from DOE and the Defense Nuclear Facilities Safety Board and with contractors at the WTP site and test facilities. 
The Department of Energy (DOE) faces significant technical challenges in successfully constructing and operating the Waste Treatment and Immobilization Plant (WTP) project that is to treat millions of gallons of highly radioactive liquid waste resulting from the production of nuclear weapons. DOE and Bechtel National, Inc. identified hundreds of technical challenges that vary in significance and potential negative impact and have resolved many of them. Remaining challenges include (1) developing a viable technology to keep the waste mixed uniformly in WTP mix tanks, both to avoid explosions and to ensure the waste can be properly prepared for further processing; (2) ensuring that the erosion and corrosion of components, such as tanks and piping systems, is effectively mitigated; (3) preventing the buildup of flammable hydrogen gas in tanks, vessels, and piping systems; and (4) understanding better the waste that will be processed at the WTP. Until these and other technical challenges are resolved, DOE will continue to be uncertain whether the WTP can be completed on schedule and whether it will operate safely and effectively. Since its inception in 2000, DOE's estimated cost to construct the WTP has tripled, and the scheduled completion date has slipped by nearly a decade to 2019. GAO's analysis shows that, as of May 2012, the project's total estimated cost had increased to $13.4 billion, and significant additional cost increases and schedule delays are likely to occur because DOE has not fully resolved the technical challenges faced by the project. DOE has directed Bechtel to develop a new cost and schedule baseline for the project and to begin a study of alternatives that include potential changes to the WTP's design and operational plans. These alternatives could add billions of dollars to the cost of treating the waste and prolong the overall waste treatment mission.
DOE is taking steps to improve its management and oversight of Bechtel's activities but continues to face challenges to completing the WTP project within budget and on schedule. DOE's Office of Health, Safety, and Security has conducted investigations of Bechtel's activities that have resulted in penalties for design deficiencies and for multiple violations of DOE safety requirements. In January 2012, the office reported that some aspects of the WTP design may not comply with DOE safety standards. As a result, DOE ordered Bechtel to suspend work on several major WTP systems, including the pretreatment facility and parts of the high-level waste facility, until Bechtel can demonstrate that activities align with DOE nuclear safety requirements. While DOE has taken actions to improve performance, the ongoing use of an accelerated approach to design and construction--an approach best suited for well-defined and less-complex projects--continues to result in cost and schedule problems, allowing construction and fabrication of components that may not work and may not meet nuclear safety standards. While guidelines used in the civilian nuclear industry call for designs to be at least 90 percent complete before construction of nuclear facilities, DOE estimates that construction of the WTP is more than 55 percent complete though the design is only about 80 percent complete. In addition, DOE has experienced continuing problems overseeing its contractor's activities. For example, DOE's incentives and management controls are inadequate for ensuring effective project management, and GAO found instances where DOE prematurely rewarded the contractor for resolving technical issues and completing work.
GAO recommends that DOE (1) not resume construction on the WTP's pretreatment and high-level waste facilities until, among other things, the facilities' design has been completed to the level established by nuclear industry guidelines; (2) ensure the department's contractor performance evaluation process does not prematurely reward contractors for resolving technical issues later found to be unresolved; and (3) take appropriate steps to determine whether any incentive payments were made erroneously and, if so, take actions to recover them. DOE generally agreed with the report and its recommendations.
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss the subject of internal control. Its importance cannot be overstated, especially in the large, complex operating environment of the federal government. Internal control is the first line of defense against fraud, waste, and abuse and helps to ensure that an entity’s mission is achieved in the most effective and efficient manner. Although the subject of internal control usually surfaces for discussion after improprieties or inefficiencies are found, good managers are always aware of and seek ways to help improve operations through effective internal control. As you requested, my testimony today will discuss the following questions: (1) What is internal control? (2) Why is it important? and (3) What happens when it breaks down? Internal control is defined as follows: “The plan of organization and methods and procedures adopted by management to ensure that resource use is consistent with laws, regulations, and policies; that resources are safeguarded against waste, loss, and misuse; and that reliable data are obtained, maintained, and fairly disclosed in reports.” Internal control should not be looked upon as separate, specialized systems within an agency. Rather, internal control should be recognized as an integral part of each system that management uses to regulate and guide its operations. Internal control is synonymous with management control in that the broad objectives of internal control cover all aspects of agency operations. Although ultimate responsibility for good internal control rests with management, all employees have a role in the effective operation of internal control that has been set by management. All internal controls have objectives and techniques: an objective is the goal to be achieved, and techniques are the means (policies, procedures, and physical safeguards, to name a few) that achieve the goal. In practice, internal control starts with defining entitywide objectives and then more specific objectives throughout the various levels in the entity. Techniques are then implemented to achieve the objectives.
In its simplest form, internal control is practiced by citizens in the daily routine of everyday life. For example, when you leave your home and lock the door or when you lock your car at the mall or on a street, you are practicing a form of internal control. The objective is to protect your assets against undesired access, and your technique is to physically secure your assets by locks. In another routine, when you write a check, you record the check in the ledger or on your personal computer. The objective is to control the money in your checking account by knowing the balance. The technique is to document the check amount and the balance. Periodically, you compare the checking account transactions and balances you have recorded with the bank statement. Your objective is to ensure the accuracy of your records to avoid costly mistakes. Your technique is to perform the reconciliation. These same types of concepts form the basis for internal control in business operations and the operation of government. The nature of their operations is, of course, significantly larger and more complex, as is the inherent risk of ensuring that assets are safeguarded, laws and regulations are complied with, and data used for decision-making and reporting are reliable. Focusing a discussion on objectives and techniques, the acquisition, receipt, use, and disposal of property, such as computer equipment, can illustrate the practice of internal control in the operation of government activities. Internal control at the activity level such as procuring equipment should be preceded, at a higher organizational level, by policy and planning control objectives and control techniques that govern overall agency operations in achieving mission objectives. Examples of high-level control objectives that logically follow a pattern include the following: The mission of the agency should be set in accordance with laws, regulations, and administration and management policy. 
Agency components should be defined in accordance with the overall mission of the agency. Missions of the agency and components should be documented and communicated to agency personnel. Plans and budgets should be developed in accordance with the missions of the agency and its components. Policies and procedures should be defined and communicated to achieve the objectives defined in plans and budgets. Authorizations should be in accordance with policies and procedures. Systems of monitoring and reporting the results of agency activities should be defined. Transactions should be classified or coded to permit the preparation of reports to meet management’s needs and other reporting requirements. Access to assets should be permitted only in accordance with laws, regulations, and management’s policy. Examples of control techniques to help achieve the objectives include the following: agency and component mission statements approved by management and its legal counsel; training of personnel in mission and objectives; long and short-range plans developed related to budgets; monitoring of results against plans and budgets; policies and procedures defined and communicated to all levels of the organization and periodically reviewed and revised based on internal reviews; authorizations defined, controls set to ensure authorizations are made, and classifications of accounts set to permit the capture and reporting of data to prepare required reports; and physical restrictions on access to assets and records, and training in security provided to employees. The policy and planning control objectives and techniques provide a framework to conduct agency operations and to account for resources and results. 
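Returning to the checking-account illustration earlier, the reconciliation routine (objective: accurate records; technique: compare your ledger with the bank statement) can be sketched in a few lines. All account data here are hypothetical.

```python
# Hypothetical checkbook ledger and bank statement, keyed by check number.
ledger    = {101: 50.00, 102: 125.40, 103: 19.99}
statement = {101: 50.00, 102: 125.40}            # check 103 has not yet cleared

def reconcile(ledger, statement):
    """Compare recorded transactions with the bank statement and
    return the items that need follow-up."""
    outstanding = {k: v for k, v in ledger.items() if k not in statement}
    unrecorded  = {k: v for k, v in statement.items() if k not in ledger}
    mismatched  = {k: (ledger[k], statement[k])
                   for k in ledger.keys() & statement.keys()
                   if ledger[k] != statement[k]}
    return outstanding, unrecorded, mismatched

outstanding, unrecorded, mismatched = reconcile(ledger, statement)
print(outstanding)   # {103: 19.99} -- recorded but not yet on the statement
```

The same compare-and-resolve pattern recurs at agency scale, for example when periodic physical inventories of equipment are compared with inventory records and differences are investigated.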
Without that framework, administration and legislative goals may not be achieved; laws and regulations may be violated; operations may not be effective and efficient and may be misdirected; unauthorized activities may occur; inaccurate reports to management and others may occur; fraud, waste, and abuse are more likely to occur and be concealed; assets may be stolen or lost; and ultimately the agency is in danger of not achieving its mission. Beneath the policy and planning level, specific controls over individual activities help ensure that operations produce the intended results. The procurement and management of computer equipment is an example of such a specific activity. Objectives and techniques should be established for each activity’s specific control. As examples of control objectives, vendors should be approved in accordance with laws, regulations, and management’s policy, as should the types, quantities, and approved purchase prices of computer equipment. As examples of related control techniques, criteria for approving vendors should be established, approved vendor master files should be controlled, and purchases should be governed by criteria such as obtaining competitive bids and setting specifications for the equipment to be procured. Likewise, control objectives should be set for the receiving process. For example, only equipment that meets contract or purchase order terms should be accepted, and equipment accepted should be accurately and promptly reported. Related control techniques include (1) detailed comparison of equipment received to a copy of the purchase order, (2) prenumbered controlled receiving documents that are accounted for, and (3) maintenance of receiving logs. Throughout the purchasing and receiving of equipment there needs to be appropriate separation of duties and interface with the accounting function to achieve funds control, timely payments, and inventorying and control of equipment received. Equipment received should be safeguarded to prevent unauthorized access and use.
For example, in addition to physical security, equipment should be tagged with identification numbers and placed into inventory records. Equipment placed into service should only be issued to authorized users, and records of the issuances should be maintained to achieve accountability. Further, physical inventories should be taken periodically and compared with inventory records. Differences in counts and records should be resolved in a timely manner and appropriate corrective actions taken. Also, equipment should be retired from use in accordance with management’s policies, including establishing appropriate safeguards to prevent unauthorized disclosure of information that may be stored in the equipment. No system of internal control is foolproof, however: errors may result from misunderstanding of instructions, mistakes of judgment, or simple carelessness. Also, procedures whose effectiveness depends on segregation of duties can be circumvented by collusion. Similarly, management authorizations may be ineffective against errors or fraud perpetrated by management. In addition, the standard of reasonable assurance recognizes that the cost of internal control should not exceed the benefit derived. Reasonable assurance equates to a satisfactory level of confidence under given considerations of costs, benefits, and risks. The cost of fraud, waste, and abuse cannot always be measured in dollars and cents. Such improper activities erode public confidence in the government’s ability to efficiently and effectively manage its programs. Management at a number of federal government agencies is faced with tight budgets and fewer personnel. In such an environment, related operating factors, such as executive and middle management turnover and the diversity and complexity of government operations, can provide a fertile environment for internal control weakness and the resulting undesired consequences. It has been almost 50 years since the Congress formally recognized the importance of internal control.
The Accounting and Auditing Act of 1950 required, among other things, that agency heads establish and maintain effective internal controls over all funds, property, and other assets for which an agency is responsible. However, the ensuing years up through the 1970s saw the government experience a crisis of poor controls. To help restore confidence in government and to improve operations, the Congress passed the Federal Managers’ Financial Integrity Act of 1982. The Integrity Act required, among other things, that (1) we establish internal control standards that agencies are required to adhere to, (2) the Office of Management and Budget (OMB) issue guidelines for agencies to follow in annually assessing their internal controls, (3) agencies annually evaluate their internal controls and prepare a statement to the President and the Congress on whether their internal controls comply with the standards issued by GAO, and (4) agency reports include material internal control weaknesses identified and plans for correcting the weaknesses. OMB has issued agency guidance that sets forth the requirements for establishing, periodically assessing, correcting, and reporting on controls required by the Integrity Act. Regarding the identification and reporting of deficiencies, OMB’s guidance states that “a deficiency should be reported if it is or should be of interest to the next level of management.
Agency employees and managers generally report deficiencies to the next supervisory level, which allows the chain of command structure to determine the relative importance of each deficiency.” The guidance further states that “a deficiency that the agency head determines to be significant enough to be reported outside the agency (i.e., included in the annual Integrity Act report to the President and the Congress) shall be considered a ‘material weakness.’” The guidance encourages reporting of deficiencies by recognizing that such reporting reflects positively on the agency’s commitment to recognizing and addressing management problems and, conversely, that failing to report a known deficiency reflects adversely on the agency. GAO’s internal control standards call for, among other things, separation of duties between authorizing, processing, recording, and reviewing transactions; qualified and continuous supervision to ensure that control objectives are achieved; and limiting access to resources and records to authorized persons to provide accountability for the custody and use of resources. Finally, the audit resolution standard requires managers to promptly evaluate findings, determine proper resolution, and establish corrective action or otherwise resolve audit findings. Attachment I provides a complete definition of the standards, and Standards for Internal Controls in the Federal Government provides additional explanation of the standards. In addition, auditors performing audits under the Chief Financial Officers Act report whether each agency is maintaining financial management systems that comply substantially with federal financial management systems requirements, federal accounting standards, and the government’s standard general ledger at the transaction level. Our report, The Statutory Framework for Performance-Based Management and Accountability (GAO/AIMD-98-52, January 28, 1998), provides more detailed information on the purpose, requirements, and implementation status of these acts.
In addition, that report refers to a number of other critically important statutes that address debt collection, credit reform, prompt pay, inspectors general, and information resources management. Although these acts address specific problem areas, sound internal controls are an essential factor in the success of these statutes. For example, the Results Act focuses on results through strategic and annual planning and performance reporting. Sound internal control is critical to effectively and efficiently achieving management’s plans and for obtaining accurate data to support performance measures. Weak internal controls pose a significant risk to government agencies. History has shown that serious neglect will result in losses to the government that can total millions, and even billions, of dollars over time. As previously mentioned, the loss of confidence in government that results can be equally serious. Although examples of poor internal controls could be drawn from many federal programs, three key areas illustrate the extent of the problems—health care, banking, and property. The Department of Health and Human Services Inspector General reported this past year that out of $163.6 billion in processed fee-for-service payments reported by the Health Care Financing Administration (HCFA) during fiscal year 1996—the latest year for which reliable numbers were available—an estimated $23.2 billion, or about 14.6 percent of the total payments, were improper. Consequently, the Inspector General recommended that HCFA implement internal controls designed to detect and prevent improper payments to correct four weaknesses where (1) insufficient or no documentation supporting claims existed, (2) medical necessity was not established, (3) incorrect classification (called coding) of information existed, and (4) unsubstantiated/unallowable services were paid. During the 1980s, the savings and loan industry experienced severe financial losses.
Extremely high interest rates caused institutions to pay high costs for deposits and other funds while earning low yields on their long-term portfolios. Many institutions took inappropriate or risky approaches in attempting to increase their capital. These approaches included accounting methods to artificially inflate the institutions’ capital position and diversifying their investments into potentially more profitable, but riskier, activities. The profitability of many of these investments depended heavily on continued inflation in real estate values to make them economically viable. In many cases, weak internal controls at these institutions and noncompliance with laws and regulations increased the risk of these activities and contributed significantly to the ultimate failure of over 700 institutions. This crisis cost the taxpayers hundreds of billions of dollars. Making profitable loans is the heart of a successful savings and loan institution. Boards of directors and senior management did not actively monitor the loan award and administrative processes to ensure excessive risks in making loans were not taken. In fact, excessive risk-taking in making loans was encouraged, resulting in a lack of effective monitoring of loan performance that allowed poorly performing loans to continue to deteriorate. Also, loan documentation was a frequent problem that further evidenced weak internal supervision of loan officers and created difficulties in valuing and selling loans after the institutions failed. In the property area, we have reported that government property was not made available for reuse or effectively controlled against misuse or theft. More recently, we reported that breakdowns exist in the Department of Defense’s (DOD) ability to protect its assets from fraud, waste, and abuse. We disclosed that the Army did not have accurate records for its reported $30 billion in real property or the $8.5 billion reported as government furnished property in the hands of contractors.
Further, we reported that pervasive weaknesses in DOD’s general computer controls place it at risk of improper modification; theft; inappropriate disclosure; and destruction of sensitive personnel, payroll, disbursement, or inventory information. In 1990, we began a special effort to review and report on the federal program areas our work had identified as high risk because of vulnerabilities to waste, fraud, abuse, and mismanagement. This effort brought a much-needed central focus on problems that were costing the government billions of dollars. Our most recent high-risk series focuses on six categories of high risk: (1) providing for accountability and cost-effective management of defense programs, (2) ensuring that all revenues are collected and accounted for, (3) obtaining an adequate return on multibillion dollar investments in information technology, (4) controlling fraud, waste, and abuse in benefit programs, (5) minimizing loan program losses, and (6) improving management of federal contracts at civilian agencies. See attachment II for a listing of the high-risk reports and our most recent reports and testimony on the Year 2000 computing crisis. In conclusion, effective internal controls are essential to achieving agency missions and the results intended by the Congress and the administration and as reasonably expected by the taxpayers. The lack of consistently effective internal controls across government has plagued the government for decades. Legislation has been enacted to provide a framework for performance-based management and accountability. Effective internal controls are an essential component of the success of that legislation. However, no system of internal control is perfect, and the controls may need to be revised as agency missions and service delivery change to meet new expectations. Management and employees should focus not necessarily on more controls, but on more effective controls. Mr.
Chairman, this concludes my statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time.

Internal control standards define the minimum level of quality acceptable for internal control systems to operate and constitute the criteria against which systems are to be evaluated. These internal control standards apply to all operations and administrative functions but are not intended to limit or interfere with duly granted authority related to the development of legislation, rule making, or other discretionary policy-making in an agency.

General standards:
1. Reasonable Assurance: Internal control systems are to provide reasonable assurance that the objectives of the systems will be accomplished.
2. Supportive Attitude: Managers and employees are to maintain and demonstrate a positive and supportive attitude toward internal controls at all times.
3. Competent Personnel: Managers and employees are to have personal and professional integrity and are to maintain a level of competence that allows them to accomplish their assigned duties, and understand the importance of developing and implementing good internal controls.
4. Control Objectives: Internal control objectives are to be identified or developed for each agency activity and are to be logical, applicable, and reasonably complete.
5. Control Techniques: Internal control techniques are to be effective and efficient in accomplishing their internal control objectives.

Specific standards:
1. Documentation: Internal control systems and all transactions and other significant events are to be clearly documented, and the documentation is to be readily available for examination.
2. Recording of Transactions and Events: Transactions and other significant events are to be promptly recorded and properly classified.
3. Execution of Transactions and Events: Transactions and other significant events are to be authorized and executed only by persons acting within the scope of their authority.
4. Separation of Duties: Key duties and responsibilities in authorizing, processing, recording, and reviewing transactions should be separated among individuals.
5. Supervision: Qualified and continuous supervision is to be provided to ensure that internal control objectives are achieved.
6. Access to and Accountability for Resources: Access to resources and records is to be limited to authorized individuals, and accountability for the custody and use of resources is to be assigned and maintained. Periodic comparison shall be made of the resources with the recorded accountability to determine whether the two agree. The frequency of the comparison shall be a function of the vulnerability of the asset.

Audit resolution standard:
Prompt Resolution of Audit Findings: Managers are to (1) promptly evaluate findings and recommendations reported by auditors, (2) determine proper actions in response to audit findings and recommendations, and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management’s attention.

High-Risk Series: An Overview (GAO/HR-97-1, February 1997).
High-Risk Series: Quick Reference Guide (GAO/HR-97-2, February 1997).
High-Risk Series: Defense Financial Management (GAO/HR-97-3, February 1997).
High-Risk Series: Defense Contract Management (GAO/HR-97-4, February 1997).
High-Risk Series: Defense Inventory Management (GAO/HR-97-5, February 1997).
High-Risk Series: Defense Weapons Systems Acquisition (GAO/HR-97-6, February 1997).
High-Risk Series: Defense Infrastructure (GAO/HR-97-7, February 1997).
High-Risk Series: IRS Management (GAO/HR-97-8, February 1997).
High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
High-Risk Series: Medicare (GAO/HR-97-10, February 1997).
High-Risk Series: Student Financial Aid (GAO/HR-97-11, February 1997).
High-Risk Series: Department of Housing and Urban Development (GAO/HR-97-12, February 1997).
High-Risk Series: Department of Energy Contract Management (GAO/HR-97-13, February 1997).
High-Risk Series: Superfund Program Management (GAO/HR-97-14, February 1997).
High-Risk Program Information on Selected High-Risk Areas (GAO/HR-97-30, May 1997).
Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10-1.19, Exposure Draft, March 1998).
Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998).
Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998).
Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998).
FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998).
Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998).
Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/T-AIMD-98-48, January 7, 1998).
Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997).
Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997).
Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/T-AIMD-98-6, October 22, 1997).
Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997).
Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997).
Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Compliance (GAO/T-AIMD-97-174, September 25, 1997).
Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997).
Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997).
Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997).
Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997).
Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997).
Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997).
Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997).
Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997).
Veterans Affairs Computer Systems: Risks of VBA’s Year 2000 Efforts (GAO/AIMD-97-79, May 30, 1997).
Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997).
Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997).
Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997).
Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997).
| Pursuant to a congressional request, GAO discussed the subject of internal control, focusing on: (1) what internal control is; (2) its importance; and (3) what happens when it breaks down. GAO noted that: (1) internal control is concerned with stewardship and accountability of resources consumed while striving to accomplish an agency's mission with effective results; (2) although ultimate responsibility for internal controls rests with management, all employees have a role in the effective operation of internal controls established by management; (3) effective internal control provides reasonable, not absolute, assurance that an agency's activities are being accomplished in accordance with its control objectives; (4) internal control helps management achieve the mission of the agency and prevent or detect improper activities; (5) the cost of fraud cannot always be measured in dollars; (6) in 1982, Congress passed the Federal Managers' Financial Integrity Act requiring: (a) agencies to annually evaluate their internal controls; (b) GAO to issue internal controls standards; and (c) the Office of Management and Budget to issue guidelines for agencies to follow in assessing their internal controls; (7) more recently, Congress has enacted a number of statutes to provide a framework for performance-based management and accountability;
(8) weak internal controls pose a significant risk to the government--losses in the millions, or even billions, of dollars can and do occur; (9) GAO and others have reported that weak internal controls over safeguarding and accounting for government property are a serious continuing problem; and (10) GAO's 1997 high-risk series identifies major areas of government operations where the risk of losses to the government is high and where achieving program goals is jeopardized.
You are an expert at summarizing long articles. Proceed to summarize the following text:
History is a good teacher. To solve the problems of today, it is important to avoid repeating past mistakes. Over the past 12 years, the department has initiated several broad-based departmentwide reform efforts intended to fundamentally reform its financial operations as well as other key business areas, including the Defense Reform Initiative, the Defense Business Operations Fund, and the Corporate Information Management initiative. These efforts, which are highlighted below, have proven to be unsuccessful despite good intentions and significant effort. The conditions that led to these previous attempts at reform remain largely unchanged today. Defense Reform Initiative (DRI). In announcing the DRI program in November 1997, the then Secretary of Defense stated that his goal was “to ignite a revolution in business affairs.” DRI represented a set of proposed actions aimed at improving the effectiveness and efficiency of DOD’s business operations, particularly in areas that had been long-standing problems—including financial management. In July 2000, we reported that while DRI got off to a good start and made progress in implementing many of the component initiatives, it did not meet expected time frames and goals, and the extent to which savings from these initiatives would be realized was yet to be determined. We noted that a number of barriers had kept the department from meeting its specific time frames and goals. The most notable barrier was institutional resistance to change in an organization as large and complex as DOD, particularly in such areas as acquisition, financial management, and logistics, which transcend most of the department’s functional organizations and have been long-standing management concerns. We also pointed out that DOD did not have a clear road map to ensure that the interrelationships between its major reform initiatives were understood and addressed and that it was investing in its highest priority requirements. 
We are currently examining the extent to which DRI efforts begun under the previous administration are continuing. Defense Business Operations Fund. In October 1991, DOD established a new entity, the Defense Business Operations Fund, by consolidating nine existing industrial and stock funds and five other activities operated throughout DOD. Through this consolidation, the fund was intended to bring greater visibility and management to the overall cost of carrying out certain critical DOD business operations. However, from its inception, we reported that the fund did not have the policies, procedures, and financial systems to operate in a businesslike manner. In 1996, DOD announced the fund’s elimination. In its place, DOD established four working capital funds. DOD estimated that for fiscal year 2003 these funds would account for and control about $75 billion. These new working capital funds inherited their predecessor’s operational and financial reporting problems. Our reviews of these funds have found that they still are not in a position to provide accurate and timely information on the results of operations. As a result, working capital fund customers cannot be assured that the prices they are charged for goods and services represent actual costs. Corporate Information Management (CIM). The CIM initiative began in 1989 and was expected to save billions of dollars by streamlining operations and implementing standard information systems to support common business operations. CIM was expected to reform all of DOD’s functional areas—including finance, procurement, material management, and human resources—through consolidating, standardizing, and integrating information systems. DOD also expected CIM to replace approximately 2,000 duplicative systems. Over the years, we made numerous recommendations to improve CIM’s management to help preclude the wasteful use and mismanagement of billions of dollars. However, these recommendations were generally not addressed.
Instead, DOD spent billions of dollars with little sound analytical justification. Rather than relying on a rigorous decision-making process for information technology investments—as used in leading private and public organizations we studied—DOD made systems decisions without (1) appropriately analyzing cost, benefits, and technical risks; (2) establishing realistic project schedules; or (3) considering how business process improvements could affect information technology investments. For one effort alone, DOD spent about $700 million trying to develop and implement a single system for the material management business area—but this effort proved unsuccessful. We reported in 1997 that the benefits of CIM had yet to be widely achieved after 8 years of effort and spending about $20 billion. The CIM initiative was eventually abandoned. DOD’s long-standing financial management difficulties have adversely affected the department’s ability to control costs, ensure basic accountability, anticipate future costs and claims on the budget (such as for health care, weapon systems, and environmental liabilities), measure performance, maintain funds control, prevent fraud, and address pressing management issues. In this regard, I would like to briefly highlight three of our recent products that exemplify the adverse impact of DOD’s reliance on fundamentally flawed financial management systems and processes and a weak overall internal control environment. In March of this year, we testified on the continuing problems with internal controls over approximately $64 million in fiscal year 2001 purchase card transactions involving two Navy activities. Consistent with our testimony last July on fiscal year 2000 purchase card transactions at these locations, our follow-up review demonstrated that continuing control problems contributed to fraudulent, improper, and abusive purchases and theft and misuse of government property.
We are currently auditing purchase and travel card usage across the department. In July 2001, we reported that DOD did not have adequate systems, controls, and managerial attention to ensure that the $2.7 billion of adjustments affecting closed appropriation accounts made during fiscal year 2000 were legal and otherwise proper. Our review of $2.2 billion of these adjustments found about $615 million (28 percent) of the adjustments should not have been made, including about $146 million that violated specific provisions of appropriations law and were thus illegal. For example, the stated purpose of one adjustment was to charge a $79 million payment made in February 1999 to a fiscal year 1992 research and development appropriation account to correct previous payment recording errors. However, the fiscal year 1992 research and development appropriation account closed at the end of fiscal year 1998—4 months before the $79 million payment was made. Therefore, the adjustment had the same effect as using canceled funds from a closed appropriation account to make the February 1999 expenditure, which is prohibited by the 1990 law. As of April 2002, DOD had reversed 140 of the 162 transactions and provided additional contract documentation for the remaining 22 transactions. However, DOD has yet to complete the reconciliation for the contracts associated with these adjustments and make the correcting entries. DOD has indicated that it will be later this year before the correct entries are made. In June 2001, we reported that DOD’s financial systems could not adequately track and report on whether the $1.1 billion in earmarked funds that the Congress provided to DOD for spare parts and associated logistical support were actually used for their intended purpose. The vast majority of the funds—92 percent—were transferred to the military services operation and maintenance accounts. 
Once the funds were transferred into the operation and maintenance accounts, the department could not separately track the use of the funds. As a result, DOD lost its ability to assure the Congress that the funds it received for spare parts purchases were used for, and only for, that purpose. Problems with the department’s financial management operations go far beyond its accounting and finance systems and processes. The department continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems—80 percent of which are not under the control of the DOD Comptroller—to gather the financial data needed to support the day-to-day management decision making. This network was not designed to be, but rather has evolved into, the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity. Many of the department’s business operations are mired in old, inefficient processes and legacy systems, some of which go back to the 1950s and 1960s. For example, the department still relies on the Mechanization of Contract Administration Services (MOCAS) system—which dates back to 1968—to process a substantial portion of the contract payment transactions for all DOD organizations. In fiscal year 2001, MOCAS processed an estimated $78 billion in contract payments. Past efforts to replace MOCAS have failed and the current effort has been delayed. As a result, for the foreseeable future, DOD will continue to be saddled with MOCAS. In the 1970s, we issued numerous reports detailing serious problems with the department’s financial management operations. 
Between 1975 and 1981, we issued more than 75 reports documenting serious problems with DOD’s cost, property, fund control, and payroll accounting systems. In the 1980s, we found that despite the billions of dollars invested in individual systems, these efforts, too, fell far short of the mark, with extensive schedule delays and cost overruns. For example, our 1989 report on eight major DOD system development efforts—including two major accounting systems—under way at that time showed that system development cost estimates doubled, two of the eight efforts were abandoned, and the remaining six efforts experienced delays of 3 to 7 years. Two recent specific system endeavors that have fallen short of their intended goals are the Standard Procurement System and the Defense Joint Accounting System. Both of these efforts were aimed at improving the department’s financial management and related business operations. Standard Procurement System (SPS). In November 1994, DOD began the SPS program to acquire and deploy a single automated system to perform all contract management-related functions within DOD’s procurement process for all DOD organizations and activities. The laudable goal of SPS was to replace 76 existing procurement systems with a single departmental system. DOD estimated that SPS had a life-cycle cost of approximately $3 billion over a 10-year period. According to DOD, SPS was to support about 43,000 users at over 1,000 sites worldwide and was to interface with key financial management functions such as payment processing. Additionally, SPS was intended to replace the contract administration functions currently performed by MOCAS. Our July 2001 report and February 2002 testimony before this Subcommittee identified weaknesses in the department’s management of its investment in SPS. Specifically: The department had not economically justified its investment in the program because its latest (January 2000) analysis of costs and benefits was not credible.
Further, this analysis showed that the system, as defined, was not a cost-beneficial investment. The department was not accumulating actual program costs and therefore did not know the total amount spent on the program to date, yet life-cycle cost projections had grown from about $3 billion to $3.7 billion. Although the department committed to fully implementing the system by March 31, 2000, this target date had slipped by over 3 ½ years to September 30, 2003, and program officials have recently stated that this date will also not be met. We recommended that the Secretary of Defense make additional investments in SPS conditional upon first demonstrating that the existing version of SPS is producing benefits that exceed costs and that future investment decisions, including those regarding operations and maintenance beyond fiscal year 2001, be based on complete and reliable economic justifications. Defense Joint Accounting System (DJAS). In 1997, DOD selected DJAS to be one of three general fund accounting systems. As originally envisioned, DJAS would perform the accounting for the Army and the Air Force as well as the DOD transportation and security assistance areas. Subsequently, in February 1998, the Defense Finance and Accounting Service (DFAS) decided that the Air Force could withdraw from using DJAS. DFAS made the decision because either the Air Force processes or the DJAS processes would need significant reengineering to allow for the development of a joint accounting system. As a result, the Air Force was allowed to start development of its own general fund accounting system—General Fund and Finance System—which resulted in the development of a fourth general fund accounting system. In June 2000, the DOD Inspector General reported that DFAS was developing DJAS at an estimated life-cycle cost of about $700 million without demonstrating that the program was the most cost-effective alternative for providing a portion of DOD’s general fund accounting.
More specifically, the report stated that DFAS had not developed a complete or fully supportable feasibility study, analysis of alternatives, economic analysis, acquisition program baseline, or performance measures, and had not reengineered business processes. According to data provided by DFAS, for fiscal years 1997-2000 approximately $120 million was spent on the development and implementation of DJAS. However, today DJAS is only being operated at two locations—Ft. Benning, Georgia, and the Missile Defense Agency. According to a DFAS official, DJAS is considered to be fully deployed—which means it is operating at all intended locations. Significant resources—in terms of dollars, time, and people—have been invested in these two efforts, without demonstrated improvement in DOD’s business operations. It is essential that DOD ensure that its investment in systems modernization results in more effective and efficient business operations, since every dollar spent on ill-fated efforts such as SPS and DJAS is one less dollar available for other defense spending priorities. As part of our constructive engagement approach with DOD, the Comptroller General met with Secretary Rumsfeld last summer to provide our perspectives on the underlying causes of the problems that have impeded past reform efforts at the department and to discuss options for addressing these challenges. There are four underlying causes: a lack of sustained top-level leadership and management accountability; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures; and inadequate incentives for seeking change. Historically, DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority to accomplish desired goals.
For example, under the CFO Act, it is the responsibility of agency CFOs to establish the mission and vision for the agency’s future financial management. However, at DOD, the Comptroller—who is by statute the department’s CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department’s financial management operations. The department has learned through its efforts to meet the Year 2000 computing challenge that to be successful, major improvement initiatives must have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense. In the Year 2000 case, the then Deputy Secretary of Defense was personally and substantially involved and played a major role in the department’s success. Such top-level support and attention helps ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes. A central finding from our report on our survey of best practices of world-class financial management organizations—Boeing; Chase Manhattan Bank; General Electric; Pfizer; Hewlett-Packard; Owens Corning; and the states of Massachusetts, Texas, and Virginia—was that clear, strong executive leadership was essential to (1) making financial management an entitywide priority, (2) redefining the role of finance, (3) providing meaningful information to decision makers, and (4) building a team of people that delivers results. DOD’s past experience has suggested that top management has not had a proactive, consistent, and continuing role in building capacity, integrating daily operations for achieving performance goals, and creating incentives. Sustaining top management commitment to performance goals is a particular challenge for DOD. In the past, the average 1.7-year tenure of the department’s top political appointees has served to hinder long-term planning and follow-through.
Cultural resistance to change and military service parochialism have also played a significant role in impeding previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization, and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures. For example, as discussed in our July 2000 report, the department encountered resistance to developing departmentwide solutions under the then Secretary’s broad-based DRI. In 1997, the department established a Defense Management Council—including high-level representatives from each of the military services and other senior executives in the Office of the Secretary of Defense—which was intended to serve as the “board of directors” to help break down organizational stovepipes and overcome cultural resistance to change called for under DRI. However, we found that the council’s effectiveness was impaired because members were not able to put their individual military services’ or DOD agencies’ interests aside to focus on departmentwide approaches to long-standing problems. Cultural resistance to change has impeded reforms not only in financial management, but also in other business areas, such as weapon system acquisition and inventory management. For example, as we reported last year, while the individual military services conduct considerable analyses justifying major acquisitions, these analyses can be narrowly focused and do not consider joint acquisitions with the other services. In the inventory management area, DOD’s culture has supported buying and storing multiple layers of inventory rather than managing with just the amount of stock needed. DOD’s past reform efforts have been handicapped by the lack of clear, linked goals and performance measures. 
As a result, DOD managers lack straightforward road maps showing how their work contributes to attaining the department’s strategic goals, and they risk operating autonomously rather than collectively. In some cases, DOD had not yet developed appropriate strategic goals, and in other cases, its strategic goals and objectives were not linked to those of the military services and defense agencies. As part of our assessment of DOD’s Fiscal Year 2000 Financial Management Improvement Plan, we reported that, for the most part, the plan represented the military services’ and defense components’ stovepiped approaches to reforming financial management and did not clearly articulate how these various efforts would collectively result in an integrated DOD-wide approach to financial management improvement. In addition, we reported that the department’s plan did not include performance measures that could be used to assess DOD’s progress in resolving its financial management problems. DOD officials have informed us that they are now working to revise the department’s approach to this plan so that future years’ updates will reflect a more strategic, departmentwide vision and provide a more effective tool for financial management reform. As it moves to modernize its systems, the department faces a formidable challenge in responding to technological advances that are changing traditional approaches to business management. For fiscal year 2003, DOD’s information technology budgetary request of approximately $26 billion will support a wide range of military operations as well as DOD business functions. As we have reported, while DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments. 
As we recently testified, our review of practices at leading organizations showed they were able to make sure their business systems addressed corporate—rather than individual business unit—objectives by using enterprise architectures to guide and constrain investments. Consistent with our recommendation, DOD is now working to develop a financial management enterprise architecture, which is a very positive development. The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business-as-usual” processes, systems, and structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD generally measures its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. Secretary Rumsfeld perhaps said it best in announcing his planned transformation at DOD: “There will be real consequences from, and real resistance to, fundamental change.” This lack of incentive has perhaps been most evident in the department’s acquisition area. In DOD’s culture, the success of a manager’s career has depended more on moving programs and operations through the DOD process than on achieving better program outcomes. The fact that a given program may have cost more than estimated, taken longer to complete, and not generated results or performed as promised was secondary to fielding a new program. 
To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide and congressional goals; (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or pulling the plug on a system or program that is failing; and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource-allocation decisions. Recognizing the need for improved financial data to effectively manage the department’s vast operations, Secretary Rumsfeld commissioned an independent study to recommend a strategy for financial management improvements. The report recognized that the department would have to undergo “a radical financial management transformation” and that it would take more than a decade to achieve. The report also noted that DOD’s current financial, accounting, and feeder systems do not provide relevant, reliable, and timely information. Further, the report pointed out that the “support of management decision-making” is generally not an objective of the financially based information currently developed or planned for future development. Additionally, the report stated that although the department had numerous system projects underway, they were narrowly focused, lacked senior management leadership, and were not part of an integrated DOD-wide strategy. The report also noted that the systemic problems discussed were not strictly financial management problems and could not be solved by DOD’s financial community. Rather, the solution would require the “concerted effort and cooperation of cross-functional communities throughout the department.” The report recommended an integrated approach to transform the department’s financial operations. 
The report noted that its proposed framework would take advantage of certain ongoing improvement actions within the department and provide specific direction for a more coordinated, managed, and results-oriented approach. The proposed course of action for transforming the department’s financial management centered around six broad elements: (1) leadership, (2) incentives, (3) accountability, (4) organizational alignment, (5) changes in certain rules, and (6) changes in enterprise practices. The report referred to its approach as a “twin-track” course of action. The first track employs a DOD-wide management approach to developing standard integrated systems; obtaining relevant, reliable, and timely financial data; and providing incentives for the department to utilize financial data in an efficient and effective way. This track will require a longer time frame and will include establishing a centralized oversight process under the DOD Comptroller for incrementally implementing the recommended structural changes and developing standard, integrated financial systems. The second track focuses on targeting, selecting, and overseeing implementation of a limited number of intraservice/cross-service projects for major cost savings or other high-value benefits under a process led by the DOD Comptroller and assisting the Secretary of Defense in establishing and managing a set of metrics. Prime tools of such improvements would include activity-based costing and benchmarking/best practices analysis to identify cost-saving opportunities. A July 19, 2001, departmental memorandum from Secretary Rumsfeld confirmed that the department needs to develop and implement an architecture for achieving integrated financial and accounting systems in order to generate relevant, reliable, and timely information on a routine basis. Secretary Rumsfeld further reiterated the need for a fundamental transformation of DOD in his “top-down” Quadrennial Defense Review. 
Specifically, his September 30, 2001, Quadrennial Defense Review Report concluded that the department must transform its outdated support structure, including decades-old financial systems that are not well interconnected. The report summed up the challenge well in stating: “While America’s businesses have streamlined and adopted new business models to react to fast-moving changes in markets and technologies, the Defense Department has lagged behind without an overarching strategy to improve its business practices.” Our experience has shown there are several key elements that collectively would enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial management problems. For the most part these elements are consistent with those discussed in the department’s April 2001 financial management transformation report. These elements, which we believe are key to any successful approach to financial management reform, include (1) addressing the department’s financial management challenges as part of a comprehensive, integrated, DOD-wide business reform; (2) providing for sustained leadership by the Secretary of Defense and resource control to implement needed financial management reforms; (3) establishing clear lines of responsibility, authority, and accountability for such reform tied to the Secretary; (4) incorporating results-oriented performance measures and monitoring tied to financial management reforms; (5) providing appropriate incentives or consequences for action or inaction; (6) establishing and implementing an enterprise architecture to guide and direct financial management modernization investments; and (7) ensuring effective oversight and monitoring. Actions on many of the key areas central to successfully achieving desired financial management and related business transformation goals—particularly those that rely on longer term systems improvements—will take a number of years to fully implement.
Secretary Rumsfeld has estimated that his envisioned transformation may take 8 or more years to complete. Our research and experience with other federal agencies have shown that this is not an unrealistic estimate. Additionally, these keys should not be viewed as independent actions, but rather as a set of interrelated and interdependent actions that are collectively critical to transforming DOD’s business operations. Consequently, both long-term actions focused on the Secretary’s envisioned business transformation and short-term actions focused on improvements within existing systems and processes will be critical going forward. Short-term actions in particular will be critical if the department is to achieve the greatest possible accountability over existing resources and more reliable data for day-to-day decision making while longer term systems and business process reengineering efforts are under way. Beginning with the Secretary’s recognition of a need for a fundamental transformation of the department’s business operations and building on some of the work begun under past administrations, DOD has taken a number of positive steps in many of these key areas, but these steps are only a beginning. Formidable challenges remain in each of these key areas. As we previously reported, establishing the right goal is essential for success. Central to effectively addressing DOD’s financial management problems will be the recognition that they cannot be addressed in an isolated, stovepiped, or piecemeal fashion separate from the other high-risk areas facing the department. Further, successfully reforming the department’s operations—which consist of people, business processes, and technology—will be critical if DOD is to effectively address the deep-rooted organizational emphasis on maintaining business-as-usual across the department. Financial management is a crosscutting issue that affects virtually all of DOD’s business areas.
For example, improving its financial management operations so that they can produce timely, reliable, and useful cost information is essential to effectively measure its progress toward achieving many key outcomes and goals across virtually the entire spectrum of DOD’s business operations. At the same time, the department’s financial management problems—and, most importantly, the keys to their resolution—are deeply rooted in and dependent upon developing solutions to a wide variety of management problems across DOD’s various organizations and business areas. For example, we have reported that many of DOD’s financial management shortcomings were attributable in part to human capital issues. The department does not yet have a strategy in place for improving its financial management human capital. This is especially critical in connection with DOD’s civilian workforce, since DOD has generally done a much better job in conjunction with human capital planning for its military personnel. In addition, DOD’s civilian personnel face a variety of size, shape, skills, and succession-planning challenges that need to be addressed. As we mentioned earlier, and it bears repetition, the department has reported that an estimated 80 percent of the data needed for sound financial management comes from its other business operations, such as its acquisition and logistics communities. DOD’s vast array of costly, nonintegrated, duplicative, and inefficient financial management systems is reflective of its lack of an integrated approach to addressing management challenges. DOD has acknowledged that one of the reasons for the lack of clarity in its reporting under the Government Performance and Results Act has been that most of the program outcomes the department is striving to achieve are interrelated, while its management systems are not integrated.
As we discussed previously, the Secretary of Defense has made the fundamental transformation of business practices throughout the department a top priority. In this context, the Secretary established a number of top-level committees, councils, and boards, including the Senior Executive Committee, Business Initiative Council, and Defense Business Practices Implementation Board. The Senior Executive Committee was established to help guide efforts across the department to improve its business practices. This committee—chaired by the Secretary of Defense, and with membership to include the Deputy Secretary, the military service secretaries, and the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to function as the “board of directors” for the department. The Business Initiative Council—comprising the military service secretaries and headed by the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to encourage the military services to explore new money-saving business practices to help offset funding requirements for transformation and other initiatives. Our research of successful public and private sector organizations shows that such entities, comprised of enterprisewide executive leadership, provide valuable guidance and direction when pursuing integrated solutions to corporate problems. Inclusion of the department’s top leadership should help to break down the cultural barriers to change and result in an integrated DOD approach for business reform. The department’s successful Year 2000 effort illustrated, and our survey of leading financial management organizations captured, the importance of strong leadership from top management. As we have stated many times before, strong, sustained executive leadership is critical to changing a deeply rooted corporate culture—such as the existing “business-as-usual” culture at DOD—and to successfully implementing financial management reform.
In the case of the Year 2000 challenge the personal, active involvement of the Deputy Secretary of Defense played a key role in building entitywide support and focus. Given the long-standing and deeply entrenched nature of the department’s financial management problems—combined with the numerous competing DOD organizations, each operating with varying, often parochial views and incentives—such visible, sustained top-level leadership will be critical. In discussing their April 2001 report to the Secretary of Defense on transforming financial management, the authors stated that, “unlike previous failed attempts to improve DOD’s financial practices, there is a new push by DOD leadership to make this issue a priority.” Strong, sustained executive leadership—over a number of years and administrations—will be key to changing a deeply rooted culture. In addition, given that significant investments in information systems and related processes have historically occurred in a largely decentralized manner throughout the department, additional actions will likely be required to implement centralized information technology investment control. In our May 2001 report we recommended that DOD take action to provide senior departmental commitment and leadership through establishment of an enterprisewide steering committee sponsored by the Secretary, which could guide development of a transformation blueprint and provide for centralized control over investments to ensure funding is provided for only those proposed investments in systems and business reforms that are consistent with the blueprint. Absent such a control, DOD runs the serious risk that the fiscal year 2003 information technology budgetary request of approximately $26 billion and future years’ requests will not result in marked improvement in DOD’s business operations.
Without such an approach, DOD runs the risk of spending billions of dollars on systems modernization, which continues to perpetuate the existing systems environment that suffers from duplication of systems, limited interoperability, and unnecessarily costly operations and maintenance and will preclude DOD from achieving the Secretary’s vision of improved financial information on the results of departmental operations. Additionally, as previously discussed, the tenure of the department’s top political appointees has generally been short in duration and as a result it is sometimes difficult to maintain the focus and momentum that is needed to resolve the management challenges facing DOD. The resolution of the array of interrelated business system management challenges previously discussed is likely to require a number of years and therefore span several administrations. The Comptroller General has proposed in congressional testimony that one option to consider to address the continuity issue would be the establishment of the position of chief operating officer. This position could be filled by an individual appointed for a set term of 5 to 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent for large, diverse organizations and would spearhead business process transformation across the department. Last summer, when the Comptroller General met with Secretary Rumsfeld, he stressed the importance of establishing clear lines of responsibility, decision-making authority, and resource control for actions across the department tied to the Secretary as a key to reform. As we previously reported, such an accountability structure should emanate from the highest levels and include the secretary of each of the military services as well as heads of the department’s various major business areas. 
The Secretary of Defense has taken action to vest responsibility and accountability for financial management modernization with the DOD Comptroller. In October 2001, the DOD Comptroller, as previously mentioned, established the Financial Management Modernization Executive and Steering Committees as the governing bodies that oversee the activities related to this modernization effort and also established a supporting working group to provide day-to-day guidance and direction in these efforts. DOD reports that the executive and steering committees met for the first time in January 2002. At the request of the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, we are initiating a review of the department’s efforts to develop and implement an enterprise architecture. As part of the effort, we will be assessing the department’s efforts to align current investments in financial systems with the proposed architecture. It is clear to us that the DOD Comptroller has the full support of the Secretary and that the Secretary is committed to making meaningful change. The key is to translate this support into a funding control mechanism that ensures DOD’s components information technology investments are aligned with the department’s strategic blueprint. Addressing issues such as centralization of authority for information systems investments and continuity of leadership is critical to successful business transformation. To make this work, it is important that the DOD Comptroller have sufficient authority to oversee the investment decisions in order to bring about the full, effective participation of the military services and business process owners across the department. As discussed in our January 2001 report on DOD’s major performance and accountability challenges, establishing a results orientation is another key element of any approach to reform. 
Such an orientation should draw upon results that could be achieved through commercial best practices, including outsourcing and shared servicing concepts. Personnel throughout the department must share the common goal of establishing financial management operations that not only produce financial statements that can withstand the test of an audit but, more importantly, routinely generate useful, reliable, and timely financial information for day-to-day management purposes. In addition, we have previously testified that DOD’s financial management improvement efforts should be measured against an overall goal of effectively supporting DOD’s basic business processes, including appropriately considering related business process system interrelationships, rather than determining system-by-system compliance. Such a results-oriented focus is also consistent with an important lesson learned from the department’s Year 2000 experience. DOD’s initial Year 2000 focus was geared toward ensuring compliance on a system-by-system basis and did not appropriately consider the interrelationships of systems and business areas across the department. It was not until the department, under the direction of the then Deputy Secretary, shifted to a core mission and function review approach that it was able to achieve the desired result of greatly reducing its Year 2000 risk. Since the Secretary has established an overall business process transformation goal that will require a number of years to achieve, going forward it is especially critical for managers throughout the department to focus on specific metrics that, over time, collectively will translate to achieving this overall goal. It is important for the department to refocus its annual accountability reporting on this overall goal of fundamentally transforming the department’s financial management systems and related business processes to include appropriate interim annual measures for tracking progress toward this goal.
In the short term, it is important to focus on actions that can be taken using existing systems and processes. It is critical to establish interim measures to both track performance against the department’s overall transformation goals and facilitate near-term successes using existing systems and processes. The department has established an initial set of metrics intended to evaluate financial performance, and it reports that it has seen improvements. For example, with respect to closed appropriation accounts, DOD reported during the first 6 months of fiscal year 2002 a reduction in the dollar value of adjustments to closed appropriation accounts of about 80 percent from the same 6-month period in fiscal year 2001. Other existing metrics concern cash and funds management, contract and vendor payments, and disbursement accounting. We are initiating a review of DOD’s short-term financial management performance metrics and will provide the Subcommittee the results of our review. DOD also reported that it is working to develop these metrics into higher-level measures more appropriate for senior management. We agree with the department’s efforts to expand the use of appropriate metrics to guide its financial management reform efforts. Another key to breaking down the parochial interests and stovepiped approaches that have plagued previous reform efforts is establishing mechanisms to reward organizations and individuals for behaviors that comply with DOD-wide and congressional goals. Such mechanisms should be geared to providing appropriate incentives and penalties to motivate decision makers to initiate and implement efforts that result in fundamentally reformed financial management and other business support operations. In addition, such incentives and consequences are essential if DOD is to break down the parochial interests that have plagued previous reform efforts. 
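The closed-account figure DOD reported is a simple percent-reduction metric comparing the same 6-month window across two fiscal years. The sketch below illustrates the arithmetic only; the dollar values are hypothetical placeholders, not the department's actual adjustment amounts.

```python
def percent_reduction(prior_period: float, current_period: float) -> float:
    """Percent reduction from a prior-period value to the current-period value."""
    if prior_period == 0:
        raise ValueError("prior-period value must be nonzero")
    return (prior_period - current_period) / prior_period * 100

# Hypothetical dollar values (in millions) of adjustments to closed
# appropriation accounts for the first 6 months of each fiscal year:
fy2001_adjustments = 1000.0  # illustrative only
fy2002_adjustments = 200.0   # illustrative only

print(percent_reduction(fy2001_adjustments, fy2002_adjustments))  # → 80.0
```

The same calculation applies to the department's other interim metrics (cash and funds management, contract and vendor payments, disbursement accounting) whenever progress is expressed as a change against a prior-period baseline.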
Incentives driving traditional ways of doing business, for example, must be changed, and cultural resistance to new approaches must be overcome. Simply put, DOD must convince people throughout the department that they must change from business-as-usual systems and practices or they are likely to face serious consequences, organizationally and personally. Enterprise architecture development, implementation, and maintenance are a basic tenet of effective information technology management. Used in concert with other information technology management controls, an architecture can increase the chances for optimal mission performance. We have found that attempting to modernize operations and systems without an architecture leads to operational and systems duplication, lack of integration, and unnecessary expense. Our best practices research of successful public and private sector organizations has similarly identified enterprise architectures as essential to effective business and technology transformation. Establishing and implementing a financial management enterprise architecture is essential for the department to effectively manage its modernization effort. The Clinger-Cohen Act requires major departments and agencies to develop, implement, and maintain an integrated architecture. As we previously reported, such an architecture can help ensure that the department invests only in integrated, business system solutions and, conversely, will help move resources away from non-value-added legacy business systems and nonintegrated business system development efforts. Without an enterprise architecture to guide and constrain information technology investments, DOD runs the serious risk that its system efforts will perpetuate the existing system environment that suffers from systems duplication, limited interoperability, and unnecessarily costly operations and maintenance.
In our May 2001 report, we pointed out that DOD lacks a financial management enterprise architecture to guide and constrain the billions of dollars it plans to spend to modernize its financial management operations and systems. Accordingly, we recommended that the department develop and implement an architecture in accordance with DOD’s policies and guidance and that senior management be involved in the investment decision-making process. DOD has awarded a contract for the development of a DOD-wide financial management enterprise architecture to “achieve the Secretary’s vision of relevant, reliable and timely financial information needed to support informed decision-making.” In fiscal year 2002, DOD received approximately $98 million and has requested another $96 million for fiscal year 2003 for this effort. Consistent with the recommendations contained in our January 1999 and May 2001 reports, DOD has begun an extensive effort to document the department’s current “as-is” financial management architecture by identifying systems currently relied upon to carry out financial management operations throughout the department. To date, the department has identified over 1,100 systems that are involved in the processing of financial information. In developing the “as-is” environment DOD has recognized that financial management is broader than just accounting and finance systems. Rather, it includes the department’s budget formulation, acquisition, inventory management, logistics, personnel, and property management systems. In developing and implementing its enterprise architecture, DOD needs to ensure that the multitude of systems efforts currently underway are designed as an integral part of the architecture. As discussed in our May 2001 report, the Army and the Defense Logistics Agency (DLA) are investing in financial management solutions that are estimated to cost about $700 million and $900 million, respectively.
Further, the Naval Audit Service has reported that the Navy has efforts underway which are estimated to cost about $2.5 billion. These programs—commercial enterprise resource planning (ERP) products—are intended to implement different commercially available products for automating and reengineering various operations within the organization. Among the functions that these ERP programs address is financial management. However, since DOD has yet to develop and implement its architecture, there is no assurance that these separate efforts will result in systems that are compatible with the DOD designated architecture. For example, the Naval Audit Service reported that there are interoperability problems with the four Navy ERP efforts and the entire program lacks appropriate management oversight. The effort to develop a financial management architecture will be further complicated as the department strives to develop multiple architectures across its various business areas and organizational components. For example, in June 2001, we recommended that DLA develop an architecture to guide and constrain its Business Systems Modernization acquisition. Additionally, we recommended that the department develop a DOD-wide logistics management architecture that would promote interoperability and avoid duplication among the logistics modernization efforts now under way in DOD component organizations, such as DLA and the military services. As previously discussed, control and accountability over investments are critical. DOD can ill afford another CIM, which invested billions of dollars but did not result in systems that were capable of providing DOD management and the Congress with more accurate, timely, and reliable information on the results of the department’s vast operations.
To better control DOD’s investments, we recommended in our May 2001 report that, until the architecture is developed, investments be limited to (1) deployment of systems that have already been fully tested and involve no additional development or acquisition cost, (2) stay-in-business maintenance needed to keep existing systems operational, (3) management controls needed to effectively invest in modernized systems, and (4) new systems or existing system changes that are congressionally directed or are relatively small, cost effective, and low risk and can be delivered in a relatively short time frame. Ensuring effective monitoring and oversight of progress will also be key to bringing about effective implementation of the department’s financial management and related business process reform. We have previously testified that periodic reporting of status information to department top management, the Office of Management and Budget (OMB), the Congress, and the audit community is another key lesson learned from the department’s successful effort to address its Year 2000 challenge. Previous submissions of the department’s Financial Management Improvement Plan have simply been compilations of data call information on the stovepiped approaches to financial management improvements received from the various DOD components. It is our understanding that DOD plans to change its approach and anchor the plan in the enterprise architecture. If the department’s future plans are upgraded to provide a departmentwide strategic view of the financial management challenges facing the department, along with planned corrective actions, these plans can serve as an effective tool not only to help guide and direct the department’s financial management reform efforts, but also to help maintain oversight of the department’s financial management operations.
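The four interim investment criteria recommended in the May 2001 report amount to a gating rule: until the enterprise architecture exists, a proposed system investment proceeds only if it fits one of the four allowed categories. The sketch below is a minimal illustration of that rule; the category keys and example proposals are invented for the example, not DOD terminology.

```python
# Hypothetical encoding of the four interim investment categories from the
# May 2001 recommendation (category names invented for illustration):
ALLOWED_CATEGORIES = {
    "fully_tested_deployment",  # (1) already fully tested, no new development/acquisition cost
    "stay_in_business",         # (2) maintenance needed to keep existing systems operational
    "management_controls",      # (3) controls needed to effectively invest in modernized systems
    "directed_or_low_risk",     # (4) congressionally directed, or small, cost-effective, low-risk
}

def may_proceed(proposal: dict) -> bool:
    """Return True only if a proposed investment fits an allowed interim category."""
    return proposal.get("category") in ALLOWED_CATEGORIES

# Illustrative proposals (names and categories hypothetical):
print(may_proceed({"name": "keep legacy payroll system running",
                   "category": "stay_in_business"}))          # True
print(may_proceed({"name": "new service-unique ERP build",
                   "category": "major_new_development"}))     # False
```

The point of the rule is the default deny: anything outside the four categories waits for the architecture, which is what keeps interim spending from perpetuating the duplicative legacy environment described above.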
Going forward, this Subcommittee’s oversight hearings, as well as the active interest and involvement of the defense appropriations and authorization committees, will continue to be key to effectively achieving and sustaining DOD’s financial management and related business process reform milestones and goals. | The Department of Defense (DOD) faces complex financial and management problems that are deeply rooted in DOD's business operations and management culture. During the past 12 years, DOD has begun several broad-based departmentwide reform efforts to overhaul its financial operations and other key business areas. These efforts have been unsuccessful. GAO identified several key elements that are essential to the success of any DOD financial management reform effort. These include (1) addressing the department's financial management challenges as part of a comprehensive, integrated, DOD-wide business reform; (2) providing for sustained leadership and resource control to implement needed reforms; (3) establishing clear lines of responsibility, authority, and accountability for such reform; (4) incorporating results-oriented performance measures and monitoring tied to the reforms; (5) providing appropriate incentives or consequences for action or inaction; (6) establishing and implementing an enterprise architecture to guide and direct financial management and modernization investments; and (7) ensuring effective oversight and monitoring.
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOD Instruction 5100.73, Major DOD Headquarters Activities, establishes a system to identify and manage the number and size of major DOD headquarters activities. The instruction also provides an approved list of major DOD headquarters activities, including the Offices of the Secretary of the Army and Army Staff; the Office of the Secretary of the Navy and Office of the Chief of Naval Operations; Headquarters, Marine Corps; and the Offices of the Secretary of the Air Force and Air Staff. All personnel working within these headquarters organizations are considered to be performing major headquarters activities functions. According to the instruction, other headquarters organizations include portions of the defense agencies, DOD field activities, and the combatant commands, along with their subordinate unified commands and respective service component commands. For example, according to DOD Instruction 5100.73, only personnel performing major headquarters activities functions in the Defense Information Systems Agency and Air Combat Command’s Intelligence Squadron would be considered headquarters, while personnel performing other functions would be excluded. Several DOD organizations have responsibilities related to major DOD headquarters activities, including those summarized below. The Office of the Deputy Chief Management Officer (ODCMO) is responsible for ensuring that DOD components are accurately identifying and accounting for major DOD headquarters activities, according to criteria established in DOD Instruction 5100.73. In addition, the Deputy Chief Management Officer has primary responsibility set forth under department guidance related to improving the efficiency and effectiveness of operations across DOD’s business functions, and is authorized by the Chief Management Officer to act as the Principal Staff Assistant to issue policy and guidance regarding matters relating to the management and improvement of DOD business operations. 
This has included responsibilities related to identifying and monitoring implementation of cost savings opportunities and efficiencies across DOD’s business areas. The Under Secretary of Defense for Personnel and Readiness, according to DOD Instruction 5100.73, is responsible for reviewing and issuing guidance over, and consolidating changes in, manpower authorizations and personnel levels for major DOD headquarters activities, among other things. In addition to these responsibilities, the Under Secretary of Defense for Personnel and Readiness also compiles the annual Defense Manpower Requirements Report, which provides DOD’s manpower requirements, to include manpower assigned to major headquarters activities, as reflected in the President’s budget request for the current fiscal year. The Under Secretary of Defense for Personnel and Readiness is also responsible for developing an annual guide for DOD components to use when compiling their Inherently Governmental/Commercial Activities (IGCA) Inventory submissions in response to statutory and regulatory reporting requirements. In addition, the Under Secretary of Defense for Personnel and Readiness shares responsibility—with the Under Secretary of Defense for Acquisition, Technology and Logistics and the Office of the Under Secretary of Defense (Comptroller)—for issuing guidance for compiling and reviewing the Inventory of Contracted Services. The Under Secretary of Defense for Personnel and Readiness in particular compiles the inventories prepared by the components. The heads of DOD components, including the Secretaries of the military departments, the Chairman of the Joint Chiefs of Staff, and the heads of other DOD components, have responsibility, according to this instruction, for maintaining a management information system that identifies the number of personnel and total operating costs of major DOD headquarters activities, and reporting on these data to the Under Secretary of Defense (Comptroller). 
Since 2010, DOD has recognized that it must reduce the cost of doing business, including reducing the rate of growth in personnel costs and finding further efficiencies in overhead and headquarters, in its business practices, and in other support activities. Therefore, the department has pursued headquarters-related reduction efforts in recent years to realize cost savings. See appendix IV for additional details on these efforts. Since 2014, and in part to respond to congressional direction, DOD has undertaken initiatives intended to improve the efficiency of its business processes, which include headquarters organizations, and to identify related cost savings, but it is unclear to what extent these initiatives will help the department achieve the savings it has identified. In May 2015, DOD concluded its Core Business Process Review, which was intended to apply lessons learned and information technology approaches from the commercial sector to the department’s six core business processes—management of human resources, healthcare, financial flow, acquisition and procurement, logistics and supply, and real property—in order to save money and resources while improving mission performance. Through this review, ODCMO identified $62 billion to $84 billion in potential cumulative savings opportunities across the six business processes for fiscal years 2016 through 2020. The review identified that these potential savings opportunities could be achieved by allowing civilian personnel attrition and retirements to occur without replacements over the next 5 years, matching labor productivity in comparable industries or sectors, and improving core processes such as rationalizing organizational structures to reduce excessive layers, optimizing contracts, and using information technology to eliminate or reduce manual processes. According to ODCMO officials, DOD ultimately concluded that these potential savings opportunities could not entirely be achieved through these means. 
Nevertheless, ODCMO officials noted that DOD is already engaging in initiatives that, in effect, address the opportunities highlighted by the Core Business Process Review. The four department-led initiatives we reviewed that include headquarters organizations are concurrent and have varied scopes. Two of the initiatives are focused on OSD and its related organizations—one of these initiatives consists of a series of business process and systems reviews and the other initiative is a review focused on reducing the number of layers in OSD. A third initiative is focused specifically on contracted services requirements for DOD organizations outside the military departments known as the Fourth Estate. Finally, the fourth initiative—the review of the organization and responsibilities of DOD—is focused on updating or adjusting organizational relationships and authorities across the entire department, with a final report to possibly be issued later in 2016. The four initiatives were not completed, or their results were not available, in time for us to assess their effect, and therefore it is unclear to what extent they will contribute toward the savings identified by the Core Business Process Review. The initiatives are described in more detail below. In August 2014, DOD announced Business Process and Systems Reviews (BPSR), which, according to BPSR implementation guidance, are intended to review business processes and the supporting information technology systems within selected organizations in OSD and associated defense agencies and DOD field activities. The purpose of these BPSRs is to provide senior officials with information to clarify whether their organizations are aimed at departmental outcomes, to identify resources allocated to outcomes and any obstacles to achieving those outcomes, and to identify activities that might be improved or eliminated. As of April 2016, DOD had completed BPSRs for five of nine organizations. 
In some cases, organizations have taken steps to implement potential improvement and savings opportunities identified by the BPSRs. For example, as a result of a review of the ODCMO, the Deputy Secretary of Defense approved the implementation of a single service provider for the Pentagon’s information technology operations in May 2015. In other cases, it is unclear whether organizations have begun taking steps to implement the opportunities identified by the BPSR reviews. For example, the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment identified a potential opportunity to reduce military construction costs by up to 3 percent through revisions to antiterrorism standards for DOD-owned buildings, but noted that this potential opportunity must first be subject to thorough analysis to fully appreciate its validity and return on investment. The department is currently working to complete BPSR reviews for four other organizations. According to ODCMO officials, DOD may conduct more BPSRs in the future but currently has no specific plans to do so once these four are completed. In July 2015, DOD announced an effort to reduce layers of management and staff—known as delayering—in the management structure of OSD and associated defense agencies and DOD field activities. According to OSD officials and DOD’s fiscal year 2017 budget request, the department intends to use this review to help respond to certain provisions in the National Defense Authorization Act for Fiscal Year 2016, namely, the 25 percent reduction to the headquarters baseline amount by fiscal year 2020 and the $10 billion in cost savings from headquarters, administrative, and support activities by fiscal year 2019. 
For this effort to reduce OSD organizational layers, the ODCMO has directed these organizations, with the support of an ODCMO team, to rationalize organizational layers and supervisory spans of control, as well as to identify redundant and obsolete workload and capture potential cost savings. ODCMO’s guidance to the organizations conducting the delayering reviews recommends, among other things, that the number of organizational layers in OSD should not be more than five, and that the capabilities and functional areas that have been historically assigned to an OSD organization will remain within the same organization, unless a functional assessment allows an opportunity for cross-organizational partnership and shared work activities. According to officials from ODCMO and the Office of the Under Secretary of Defense for Personnel and Readiness, the organizations have identified the civilian positions they intend to eliminate or restructure as part of the initiative. However, the results of the initiative are not yet publicly available. The Deputy Chief Management Officer stated that the department would issue a report, at an unspecified time, that will include the cost savings identified by this OSD Organizational Delayering initiative. According to the department’s budget request for fiscal year 2017, the objective of this OSD delayering review is to achieve $1.5 billion in cost savings from fiscal year 2018 through fiscal year 2021. Also in July 2015, DOD announced that it would seek to improve the outcomes of contracted services through standardized processes and governance structures. This initiative is intended, according to OSD officials, to help the department achieve the 25 percent headquarters reduction and the $10 billion in headquarters-related cost savings, which were directed by the National Defense Authorization Act for Fiscal Year 2016. 
In December 2015, the Deputy Chief Management Officer directed Fourth Estate organizations to convene internal review boards known as Services Requirements Review Boards to review their requirements for contracted services. These boards, which DOD has implemented for OSD, the defense agencies, and DOD field activities, are intended to assess every service contract within these organizations that is worth $10 million or more to determine whether a valid requirement for that contract remains or whether the funds could be better employed elsewhere within the same organization. The results of these reviews are then considered by DOD leadership using a senior review panel, comprising the Deputy Chief Management Officer, the Principal Deputy Under Secretary of Defense for Acquisition, Technology and Logistics, and the Principal Staff Assistant for the organization being reviewed. In March 2016, the Deputy Chief Management Officer reported that the objective of this effort would be to achieve savings of at least 5 percent in spending on such contracts, but did not specify the baseline from which the 5 percent would be measured. According to the department’s budget request for fiscal year 2017, DOD expects to realize savings through this initiative of $1.9 billion in direct appropriations by 2021 within OSD, the defense agencies, and DOD field activities, and additional savings in working capital-funded entities. The Deputy Chief Management Officer also stated that the department would issue a single report that will include the cost savings identified by the Services Requirements Review Board, as well as the OSD Organizational Delayering initiative, but did not specify a time frame for doing so. 
In January 2016, the Deputy Secretary of Defense noted that the Secretary of Defense, as part of his institutional reform agenda, directed the Deputy Chief Management Officer and the Director for Joint Force Development (J7) to lead a review of organizations and responsibilities of the DOD. The objective of this review is to make recommendations for updates or adjustments to organizational relationships and authorities, based on the department’s experiences operating under the Goldwater-Nichols Department of Defense Reorganization Act of 1986. The department intends to use this review, according to ODCMO officials, to address the provision in the National Defense Authorization Act for Fiscal Year 2016 that requires DOD to conduct a comprehensive review of headquarters and administrative and support activities for purposes of consolidating and streamlining headquarters functions. To conduct the review, ODCMO officials stated that the department has developed five working groups, led jointly by OSD and Joint Staff officials, with each working group addressing a different topic: optimization of command and control relationships to meet current and future security challenges; improved coordination and elimination of overlaps between OSD and the Joint Staff; the possible establishment of U.S. Cyber Command as a unified combatant command; potential improvements to the requirements and acquisition decision-making processes; and increased flexibility in law and policy governing joint duty qualifications. In addition, as part of this review of DOD’s organization and responsibilities, the military departments have established their own working groups to assess the structures of their respective secretariats and staffs to identify potential improvements. According to ODCMO officials, most of the working groups planned to complete their reviews and brief the Secretary of Defense by March 2016. The results of these reviews were not available at the time of our review. 
However, in a speech in April 2016, the Secretary of Defense provided an overview of some preliminary recommendations that may result from this review, such as clarification to the role of the Chairman of the Joint Chiefs of Staff, changes to joint personnel management, and adapting combatant commands to new functions. According to ODCMO officials, the department may issue a report with findings and recommendations on the overall review later in 2016. DOD has taken steps to improve some available data on headquarters organizations, but does not have reliable data for assessing headquarters functions and associated costs. Consistent with a past GAO recommendation, DOD published a new framework describing major headquarters organizations and stated that it has established a new definition of major DOD headquarters activities (although the department has not yet updated its headquarters instruction to reflect this definition). In addition, DOD is working to identify which organizations or portions of organizations meet a new definition of major DOD headquarters activities that was included in the National Defense Authorization Act for Fiscal Year 2016, and intends to revise its headquarters instruction upon completion of this effort. Finally, the department plans to update a key resource database, the Future Years Defense Program (FYDP), to improve visibility of headquarters resources. However, the one department-wide data set that identifies specific DOD headquarters functions contains unreliable data because the department has not aligned these data with its definition of major headquarters activities, nor does it have plans to collect information on the costs associated with functions within headquarters organizations. In 2015, the department began an effort to improve some available headquarters data, which addresses a fundamental problem that our prior reports have cited and DOD has acknowledged as a longstanding challenge. 
Specifically, in August 2015, DOD published a framework describing the major headquarters activities and stated that it has established a new definition for its major DOD headquarters, although the department has not yet updated its guiding instruction on headquarters to reflect this new definition. The National Defense Authorization Act for Fiscal Year 2016 was enacted in November 2015 and included a revised definition of major DOD headquarters activities. Since that time, according to ODCMO officials, DOD has been working to determine which organizations or portions of organizations meet the new definition in the act in order to establish a more accurate headquarters baseline. In March 2016, the Deputy Chief Management Officer reported that the department plans to complete this effort by June 2016, thereby institutionalizing an authoritative headquarters baseline for purposes of reporting and tracking. At this time, the department also plans to update its guiding instruction on headquarters with the new definition. According to ODCMO officials, tracking would include revising the headquarters-related coding of program elements in its key resource database—the FYDP—to ensure they are appropriately designated as headquarters according to the new definition, and, where necessary, to break down these program element codes into headquarters and nonheadquarters components to better reflect allocation of headquarters resources. According to DOD officials, they have begun updating the resource coding in the FYDP and plan to complete this effort by late 2016. The re-baselining effort took on increased urgency when, in August 2015, the Deputy Secretary of Defense announced a new 25 percent cost-reduction target for major DOD headquarters activities (the military departments, OSD staff, the Joint Staff, defense agencies, DOD field activities, and combatant commands) in anticipation of a congressional mandate for additional reductions. 
In addition, the National Defense Authorization Act for Fiscal Year 2016 allows documented savings achieved pursuant to this 25 percent headquarters reduction to be counted toward another of the act’s requirements, namely, that the Secretary of Defense implement a plan to ensure the department achieves not less than $10 billion in cost savings from the headquarters, administrative, and support activities of the department by fiscal year 2019. According to ODCMO officials, DOD plans to meet this $10 billion savings requirement by identifying existing efficiency initiatives whose savings will be applied toward the savings total. For example, ODCMO officials stated that the ODCMO will apply the savings that were identified through an information technology consolidation initiative, through its OSD Organizational Delayering initiative, as well as through its efforts to streamline contracted services by means of the Services Requirements Review Board. In March 2016, the Deputy Chief Management Officer provided an interim response to Congress identifying that the fiscal year 2017 President’s Budget included $7.8 billion in new efficiencies over the next 5 years, but did not provide more specific information on when and from where in the budget these efficiencies would be realized or how the department would apply them to the $10 billion savings required by Congress. ODCMO’s interim response stated that the department will issue a report that provides a breakdown of the $10 billion cost savings by year, but did not provide a time frame for doing so. Part of the reason that DOD must undertake concurrent reviews and studies to achieve efficiencies is that the department does not have reliable data in two main areas. First, available DOD-wide data sources on headquarters functions are not aligned with the department-wide definition of headquarters. 
We attempted to conduct an independent review to assess headquarters functions, and we considered several department-wide data sources but found limitations in each. Second, DOD’s data on headquarters functions do not include information on costs associated with functions within headquarters organizations, nor, according to OSD officials, does the department have plans to collect such information. According to federal standards for internal control, an agency must have relevant, reliable, and timely information to run and control its operations. This information is required to make operating decisions, monitor performance, and allocate resources, among other things. The lack of reliable data may hinder DOD’s ability to conduct a comprehensive review for purposes of consolidating and streamlining headquarters functions, among other things, as DOD was directed to do in the National Defense Authorization Act for Fiscal Year 2016. According to OSD officials, although DOD has several sources to organize and categorize its workforce, only one department-wide data set, known as the Inherently Governmental / Commercial Activities (IGCA) Inventory, identifies specific DOD headquarters functions in the form of authorized military and civilian positions. In the IGCA Inventory, each DOD position is assigned a function based on the type of work performed, and 38 of these 306 functions are headquarters-related. Examples of such headquarters-related functions include Operation Planning and Control, Military Education and Training, and Systems Acquisition. Navy guidance specifically notes that the IGCA Inventory may be used as a total force shaping tool and a starting point for future manpower reviews or initiatives. 
For an example of the type of information that reliable data on headquarters functions could produce, see appendix V, which provides our analysis of the headquarters functions with the highest number of positions for each military service and Fourth Estate component in fiscal year 2014. However, we found that because the data in this data set were not aligned with headquarters definitions, they were not sufficiently reliable to assess these functions. IGCA Inventory guidance calls for components to assign headquarters-related DOD function codes to positions based on a headquarters definition that, while derived from DOD Instruction 5100.73, does not include all elements of the definition in that instruction. As a result, we found that the data on the number and functions of DOD’s military and civilian headquarters positions have varying levels of accuracy. For example, in fiscal year 2014, only 79 percent of authorized positions in OSD were considered headquarters within the IGCA Inventory, even though OSD is considered a headquarters organization in its entirety under both the definition provided in DOD Instruction 5100.73 and the new definition. Officials from all four military services informed us that, from fiscal year 2010 through fiscal year 2014, they discovered some positions that had been incorrectly coded as headquarters and undertook varying efforts to correct them. As a result, we have more confidence in data presented in the later years of the 2010 to 2014 period we reviewed, but data limitations in the earlier years covered by our review precluded us from assessing trends of these functions over time. While service officials told us they had taken steps to improve consistency of the headquarters-related DOD function codes in the IGCA Inventory, DOD does not have plans to update the data set to ensure that the headquarters-related DOD function codes in the IGCA Inventory are also consistent with the new headquarters definition. 
According to OSD officials, they have no plans to do so because the IGCA Inventory is not the department’s authoritative source for headquarters data. However, DOD and service officials have noted that, over time, officials have inconsistently interpreted what should be counted as headquarters according to the instruction, resulting in varying counts of headquarters positions depending on the source of the data. For example, in its Fiscal Year 2015 Defense Manpower Requirements Report, DOD included an estimate for fiscal year 2014 of 108,073 headquarters positions across the department, that is, OSD, the military services, the Joint Staff and combatant command headquarters, and the defense agencies and DOD field activities. In contrast, for these same organizations, DOD reported a total of 74,221 headquarters positions in its IGCA Inventory for fiscal year 2014 and a total of 61,046 headquarters positions in a May 2015 headquarters-related report, known as the Section 904 report. We believe that alignment of data sets containing headquarters-related codes, such as the IGCA Inventory, with the department-wide headquarters definition will provide senior DOD officials with the relevant, reliable, and timely information they need to make operating decisions, monitor performance, and allocate resources. Without alignment of data on department-wide military and civilian positions that have headquarters- related DOD function codes with the authoritative, revised definition of major DOD headquarters activities, the department will not have reliable data to enable senior officials to accurately assess headquarters functions, target specific functional areas for further analysis, or identify potential streamlining opportunities. 
ODCMO officials stated that, once they have finalized the headquarters definition, they plan to complete an effort to improve the accuracy of the resource levels attached to headquarters organizations by ensuring that organizations are appropriately designated as headquarters in the FYDP and, as needed, breaking these organizations down into smaller headquarters and nonheadquarters program element codes. However, these actions will not provide reliable information on the costs associated with the various functions within those headquarters organizations. According to ODCMO officials, the department does not have plans to collect such information because it believes that improving the accuracy of the resources associated with headquarters organizations will be sufficient to support any future DOD assessments of headquarters. We believe, however, that detailed information that provides visibility into the costs associated with functions within headquarters organizations would better facilitate identification of opportunities for consolidation or elimination across organizational boundaries. Moreover, the defense committees have previously noted that, to achieve significant savings, the department must focus on consolidating and eliminating organizations and personnel that perform similar functions and missions. Army officials have also noted that being able to track the Army’s manpower by function could be useful to understand cost drivers in the budget, and could provide a starting point to help them determine the best application of structure and manpower. In addition, the National Defense Authorization Act for Fiscal Year 2016 directs the Secretary of Defense to conduct a comprehensive review of DOD headquarters, among other things, for purposes of consolidating and streamlining headquarters functions. 
This functional review is to address the extent to which certain groupings of DOD headquarters organizations—such as OSD, the military departments, the defense agencies, and other organizations—have duplicative staff functions and services and could therefore be consolidated, eliminated, or otherwise streamlined. We have previously identified key steps to help analysts and policymakers conduct reviews to identify and evaluate instances of duplication, fragmentation, and overlap. One step in conducting such a review is to identify the potential positive and negative effects of any duplication, fragmentation, or overlap by assessing program implementation, outcomes and impact, and cost-effectiveness. In particular, we found that assessing and comparing the performance and cost-effectiveness of programs can help analysts determine which programs, or aspects of programs, to recommend for actions such as consolidation or elimination. In the absence of reliable data on the costs of functions within headquarters organizations, we obtained data from each of the military services’ manpower databases on all military and civilian headquarters positions for fiscal years 2010 through 2014. However, we could not reliably calculate the estimated costs to DOD of filling those positions due to inconsistencies and incomplete information in the pay grade data we collected from the Army and the Air Force. For example, the Army could not provide data to distinguish whether the 15 percent of its headquarters positions allocated to its reserve components in 2014 were full- or part-time—a factor needed to estimate costs. In the Air Force, we were unable to match civilian pay scales to 16 percent of Air Force civilian headquarters positions in 2014. For the Fourth Estate headquarters positions, DOD was unable to provide pay grade and location information from its Fourth Estate data system in time for our review due to other ongoing, headquarters-related initiatives. 
However, according to our analysis of data-reliability questionnaires sent to Fourth Estate organizations, 25 of the 38 Fourth Estate organizations, or 66 percent, reported that their data in the Fourth Estate data system for the period from fiscal year 2010 through fiscal year 2014 were incomplete or inaccurate. Once the definition of major DOD headquarters activities is published in DOD guidance, without reliable information on the costs associated with functions within headquarters organizations—through revisions to the IGCA Inventory or another method—the department will not be able to accurately estimate resources associated with specific headquarters functions, which in turn could help senior officials identify streamlining opportunities, make decisions, monitor performance, and allocate resources. As it faces a potentially extended period of fiscal constraints, DOD has concluded that reducing the resources it devotes to headquarters is an area where cost savings can be achieved. The defense committees agree, but have expressed concern about DOD’s ability to identify significant cost savings given the department’s poor visibility into the total resources being devoted across organizations to similar functions and missions. Further, Congress has recently directed the Secretary of Defense to ensure that the department achieves savings in the total funding available for major DOD headquarters activities by fiscal year 2020 that are not less than 25 percent of the baseline amount, and to implement a plan to ensure the department achieves not less than $10 billion in cost savings from its headquarters, administrative, and support activities by fiscal year 2019. 
Since 2014, DOD has undertaken concurrent initiatives of varying scope that include improving the efficiency of headquarters organizations and identifying related cost savings, but, because they are not yet completed, it is unclear to what extent these initiatives will help the department to achieve the $62 billion to $84 billion in cost savings opportunities that it has identified. DOD’s limited information on which positions perform which headquarters functions and their associated costs hinders its ability to identify potential cost savings associated with opportunities to consolidate and streamline these headquarters functions. While the department has taken steps to respond to a new headquarters definition and has begun to align its key resource database—the FYDP—to better reflect the new definition, these efforts are not yet completed and the department does not have plans to align these efforts with the existing data on department-wide military and civilian positions that have headquarters-related DOD function codes or to collect information on the costs associated with functions within headquarters organizations. Without such alignment and such information, the department will not be well-positioned to reliably conduct an assessment of its headquarters workforce by function to identify opportunities for streamlining and related cost savings. Conducting such functional analysis could allow DOD officials to raise questions about the number and types of positions with particular headquarters functions and to better understand cost drivers and identify efficiency-related opportunities within the department. 
To further DOD’s efforts to identify opportunities for more efficient use of headquarters-related resources, we recommend that the Secretary of Defense direct the Deputy Chief Management Officer, in coordination with the Under Secretary of Defense for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, and the heads of the defense agencies and DOD field activities, to take the following two actions: align DOD’s data on department-wide military and civilian positions that have headquarters-related DOD function codes with the revised definition of major DOD headquarters activities in order to provide the department with reliable data to accurately assess headquarters functions and identify opportunities for streamlining or further analysis; and once this definition is published in DOD guidance, collect reliable information on the costs associated with functions within headquarters organizations—through revisions to the IGCA Inventory or another method—in order to provide the department with detailed information for use in estimating resources associated with specific headquarters functions, and in making decisions, monitoring performance, and allocating resources. We provided a draft of this report to DOD for review and comment. In written comments on a draft of this report, DOD concurred with our two recommendations. DOD’s comments are summarized below and reprinted in their entirety in appendix VI. DOD concurred with our recommendations to (1) align DOD’s data on department-wide military and civilian positions that have headquarters- related DOD function codes with the revised definition of major DOD headquarters activities, and (2) once this definition is published in DOD guidance, collect reliable information on the costs associated with functions within headquarters organizations—through revisions to the IGCA Inventory or another method. 
In its response, DOD stated that it is currently updating civilian and military manpower and total obligation authority baselines for major DOD headquarters activities to align with the new headquarters-related definition and framework. The department stated that this effort includes updating data architecture for coding major DOD headquarters activities, by program element code, in the Future Years Defense Program, and noted that this data architecture will serve as the authoritative methodology to account for headquarters manpower and resources in the future. Further, DOD stated that, once those efforts are complete and the new framework is codified in an update to DOD Instruction 5100.73, the department will determine how best to align the function code taxonomy, which is the source of data for the IGCA Inventory, with the revised framework and definitions. We agree that determining how to align the data set from the IGCA Inventory with the revised framework and definitions is an important first step and, if implemented, would address the intent of our first recommendation. Finally, DOD stated in its comments that the updated data architecture will enable the department to collect consistent, comprehensive, and authoritative information on the costs associated with major DOD headquarters activities. We also agree that updating the data architecture for coding major DOD headquarters activities will help improve the department’s visibility of headquarters-related resources. As the department works to complete this effort, we believe that it should develop a means of collecting reliable information on the costs associated with functions within headquarters organizations. Doing so would provide the department with detailed information for use in estimating resources associated with specific headquarters functions, and, if implemented, would address the intent of our second recommendation. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Deputy Chief Management Officer, the Chairman of the Joint Chiefs of Staff, the Secretaries of the military departments, and the heads of the defense agencies and DOD field activities. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. We have issued several reports since 2012 on defense headquarters and on the Department of Defense’s (DOD) challenges in accounting for the resources devoted to headquarters. In March 2012, we found that while DOD has taken some steps to examine its headquarters resources for efficiencies, additional opportunities for savings may exist by further consolidating organizations and centralizing functions. We also found that DOD’s data on its headquarters personnel lacked the completeness and reliability necessary for use in making efficiency assessments and decisions. Recommendations: We recommended that the Secretary of Defense direct the Secretaries of the military departments and the heads of the DOD components to continue to examine opportunities to consolidate commands and to centralize administrative and command support services, functions, or programs. 
Additionally, we recommended that the Secretary of Defense revise DOD Instruction 5100.73, Major DOD Headquarters Activities, to include all headquarters organizations; specify how contractors performing headquarters functions will be identified and included in headquarters reporting; clarify how components are to compile the information needed for headquarters-reporting requirements; and establish time frames for implementing actions to improve tracking and reporting of headquarters resources. DOD concurred with the first recommendation and partially concurred with the second recommendation in this report. Status: DOD officials have stated that, since 2012, several efforts have been made to consolidate or eliminate commands and to centralize administrative and command support services, functions, or programs. For example, Office of the Secretary of Defense (OSD) officials said that DOD has begun efforts to assess which headquarters organizations are not currently included in its guiding instruction on headquarters, and will update the instruction. However, as of June 2016, DOD has not completed its update of the instruction to include all major headquarters activity organizations. OSD officials stated that they would begin updating this instruction upon completion of the effort to assess headquarters organizations. In addition, DOD has not specified how contractors will be identified and included in headquarters reporting, and has not identified a time frame for action. In May 2013, we found that authorized military and civilian positions at the geographic combatant commands—excluding U.S. Central Command—had increased by about 50 percent from fiscal year 2001 through fiscal year 2012, primarily due to the addition of new organizations, such as the establishment of U.S. Northern Command and U.S. Africa Command, and increased mission requirements for the theater special operations commands.
We also found that DOD’s process for sizing its combatant commands had several weaknesses, including the absence of a comprehensive, periodic review of the existing size and structure of these commands and inconsistent use of personnel-management systems to identify and track assigned personnel. Recommendations: We recommended that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff to revise its guiding instruction on managing joint personnel requirements—Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program—to require a comprehensive and periodic evaluation of whether the size and structure of the combatant commands meet assigned missions. DOD did not concur with this recommendation, but we continue to believe that institutionalizing a periodic evaluation of all authorized positions would help to systematically align manpower with missions and add rigor to the requirements process. The department concurred with the remaining three recommendations, namely, that the Secretary of Defense: (1) direct the Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A to require the combatant commands to identify, manage, and track all personnel and to identify specific guidelines and time frames for the combatant commands to consistently input and review personnel data in the system; (2) direct the Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders and Secretaries of the military departments, to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands; and (3) direct the Under Secretary of Defense (Comptroller) to revise volume 2, chapter 1, of DOD’s Financial Management Regulation 7000.14R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and 
civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contract services, travel, and supplies. Status: With regard to the recommendation to revise the instruction to require the commands to improve visibility over all combatant command personnel, DOD has established a new manpower tracking system, the Fourth Estate Manpower Tracking System, that is to track all personnel data, including temporary personnel, and identify specific guidelines and timelines to input/review personnel data. With regard to the recommendation to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands, as of August 2015, the process outlined by DOD to gather information on authorized and assigned personnel at the service component commands is the same as the one identified during our prior work. With regard to the recommendation to revise DOD’s Financial Management Regulation, in December 2014 DOD indicated that the Office of the Under Secretary of Defense (Comptroller) had reinstituted an existing budgetary document, the President’s Budget 58, Combatant Command Direct Funding, and directed the military services to use this budget exhibit in its guidance on submission of the fiscal years 2016 through 2020 program and budget. The President’s Budget 58 provides the department’s justification and visibility for changes in the level of resources required for each combatant command. While the President’s Budget 58 does not provide detailed information on the number of authorized military or civilian positions and contractor full-time equivalents at each combatant command, it does identify the funding required by each combatant command for mission and headquarters support, which, in general, satisfies the intent of our recommendation.
In June 2014, we found that DOD’s functional combatant commands have shown substantial increases in authorized positions and costs to support headquarters operations since fiscal year 2004, primarily to support recent and emerging missions, including military operations to combat terrorism and the emergence of cyberspace as a warfighting domain. Further, we found that DOD did not have a reliable way to determine the resources devoted to management headquarters as a starting point for DOD’s planned 20 percent reduction to headquarters budgets, and thus we concluded that actual savings would be difficult to track. We recommended that DOD reevaluate the decision to focus reductions on management headquarters to ensure meaningful savings and set a clearly defined and consistently applied baseline starting point for the reductions. Further, we recommended that DOD track the reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. Recommendations: We recommended that the Secretary of Defense reevaluate the decision to focus reductions on management headquarters to ensure the department’s efforts ultimately result in meaningful savings. DOD partially concurred, questioning, in part, the recommendation’s scope. We agreed that the recommendation has implications beyond the functional combatant commands, which was the scope of our review, but the issue we identified is not limited to these commands. We also recommended that the Secretary of Defense (1) set a clearly defined and consistently applied starting point as a baseline for reductions; and (2) track reductions against the baselines in order to provide reliable accounting of savings and reporting to Congress. DOD concurred with these two recommendations. Status: To address the two recommendations with which it concurred, DOD said that it planned to use the Future Years Defense Program data to set the baseline going forward. 
DOD stated that it was enhancing data elements within a DOD resource database to better identify management headquarters resources to facilitate tracking and reporting across the department. A December 2014 Resource Management Decision directed DOD components to identify and correct inconsistencies in major headquarters activities in authoritative DOD systems and reflect those changes in the fiscal year 2017 program objective memorandums or submit them into the manpower management system, but this effort has not yet been completed. In January 2015, we found that, over the previous decade, authorized military and civilian positions have increased within the DOD headquarters organizations we reviewed—OSD, the Joint Staff, and the Army, Navy, Marine Corps, and Air Force secretariats and staffs—but the size of these organizations has recently leveled off or begun to decline. In addition, we found that the DOD headquarters organizations we reviewed do not determine their personnel requirements as part of a systematic requirements-determination process, nor do they have procedures in place to ensure that they periodically reassess these requirements as outlined in DOD and other guidance. Current personnel levels for these headquarters organizations are traceable to statutory limits enacted in the 1980s and 1990s to force efficiencies and reduce duplication. However, we found that these limits have been waived since fiscal year 2002 and have little practical utility because of statutory exceptions for certain categories of personnel, and because the limits exclude personnel in supporting organizations that perform headquarters-related functions.
Recommendations: We recommended that the Secretary of Defense direct the following three actions: (1) conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services’ secretariats and staff, which should include analysis of mission, functions, and tasks, and the minimum personnel needed to accomplish those missions, functions, and tasks; (2) submit these personnel requirements, including information on the number of personnel within OSD and the military services’ secretariats and staffs that count against the statutory limits, along with any applicable adjustments to the statutory limits, to Congress, along with any recommendations needed to modify the existing statutory limits; and (3) establish and implement procedures to conduct periodic reassessments of personnel requirements within OSD and the military services’ secretariats and staffs. DOD partially concurred with all of these recommendations. In addition, we raised a matter for congressional consideration, namely, that Congress should consider using the results of DOD’s review of headquarters personnel requirements to reexamine the statutory limits. Such an examination could consider whether supporting organizations that perform headquarters functions should be included in statutory limits and whether the statutes on personnel limitations within the military services’ secretariats and staffs should be amended to include a prohibition on reassigning headquarters-related functions elsewhere. Status: With regard to the recommendation that DOD conduct a systematic determination of personnel requirements for OSD, the Joint Staff, and the military services’ secretariats and staff, the department stated that it will continue to use the processes and prioritization that are part of the Planning, Programming, Budgeting, and Execution process, and will also investigate other methods for aligning personnel to missions and priorities. 
However, DOD did not specify whether any of these actions would include a workforce analysis. With regard to the recommendation related to conducting periodic reassessments of personnel requirements within OSD and the military service secretariats and staffs, DOD said that it supports the intent of the recommendation but that such periodic reassessments require additional resources and personnel, which would drive an increase in the number of personnel performing major DOD headquarters activities. Specifically, DOD stated it intends to examine the establishment of requirements determination processes across the department, to include the contractor workforce, but this will require a phased approach across a longer time frame. In December 2014, the Secretary of Defense directed the Deputy Chief Management Officer to develop and implement a manpower requirements validation process for OSD, the defense agencies, and DOD field activities for military and civilian manpower, but, as of June 2016, this effort has not yet been completed. With regard to the recommendation related to the submission of the personnel requirements to Congress, along with any applicable adjustments and recommended modifications, DOD stated that it has ongoing efforts to refine and improve its reporting capabilities associated with these requirements, noting that the department has to update DOD Instruction 5100.73, Major DOD Headquarters Activities, before it can determine personnel requirements that count against the statutory limits. We previously recommended that the department update this instruction, and, according to DOD officials, they intend to begin updating the instruction in June 2016. In addition, we noted that DOD did not indicate whether the department would submit personnel requirements that count against the statutory limits in the Defense Manpower Requirements Report, as we recommend, once the instruction is finalized.
We continue to believe that submitting these personnel requirements to Congress in this DOD report would provide Congress with key information to determine whether the existing statutory limits on military and civilian personnel are effective in limiting headquarters personnel growth. With regard to the matter for congressional consideration, the Senate Armed Services Committee markup of the National Defense Authorization Act for Fiscal Year 2017 includes a provision that would allow the OSD and the military departments to increase their number of military and civilian personnel by 15 percent in times of national emergency. In the Inherently Governmental / Commercial Activities (IGCA) Inventory for fiscal years 2010 through 2014, there are 38 functions, each designated by a specific DOD function code, that have a headquarters designation; of these, 35 are labeled “Management Headquarters,” while 3 are labeled “Combatant Headquarters.” For the purposes of this report, we use the term “headquarters,” rather than “management headquarters” or “combatant headquarters,” when referring to the titles of these 38 functions in the body of the report. Table 1 lists the 38 headquarters functions with accompanying descriptions. House Report 113-446 and Senate Report 113-176 included provisions that we, among other things, identify the Department of Defense’s (DOD) headquarters reduction efforts to date and any trends in personnel and other resources being devoted to selected functional areas within and across related organizations. This report (1) describes the status of DOD’s initiatives since 2014 to improve the efficiency of headquarters organizations and identify related cost savings; and (2) assesses the extent to which DOD has reliable data to assess headquarters functions and their associated costs. 
To describe the status of DOD’s initiatives to improve the efficiency of headquarters organizations and identify related cost savings, we identified and reviewed DOD headquarters-related efficiency efforts begun since 2014. We obtained documentary and testimonial evidence from senior officials in the Office of the Deputy Chief Management Officer to determine the scope and status of these headquarters-related efficiency efforts and what actions, if any, DOD has taken as a result of the efforts. To assess the extent to which DOD has reliable data to assess headquarters functions and their associated costs, we took two main steps. First, we identified and reviewed DOD-wide sources of information that would provide data on the department’s workforce in terms of whether the workforce is performing headquarters work and the specific headquarters functions that workforce is performing. We discussed several department-wide data sources with officials from the Office of the Deputy Chief Management Officer, the Office of the Under Secretary of Defense for Personnel and Readiness, and the Office of Cost Assessment and Program Evaluation, and reviewed data from those sources—specifically, the Future Years Defense Program, the Defense Manpower Data Center, the Inherently Governmental / Commercial Activities (IGCA) Inventory, and the Inventory of Contracted Services. For this report, we analyzed data and information related to the IGCA Inventory because it was the only DOD-wide data set identified that allowed us to determine the military and civilian workforce—in the form of authorized positions—by both headquarters and function. However, we found that the data in the IGCA Inventory were submitted by the various DOD organizations at different points in a given fiscal year.
To ensure that the data would be as close to the end of each fiscal year as possible, we obtained data from each of the military services’ manpower databases used to populate DOD’s IGCA Inventory for fiscal years 2010 through 2014, which was the most recent 5-year period available during our review. For each service database, we identified the subset of military and civilian positions considered headquarters according to IGCA guidance. We then analyzed these headquarters positions for each organization and for each fiscal year by number of military and civilian positions, function, grade, and location. We discussed the data, and the reasons for any patterns or changes we observed in them, with military service representatives. DOD was unable to provide similar data for organizations outside the military departments known as the Fourth Estate in time for our review, so we collected data on the Fourth Estate’s military and civilian positions directly from the IGCA Inventory for fiscal years 2012 to 2014. We assessed the data we received against federal standards for internal control, which call for an agency to have relevant, reliable, and timely information in order to run and control its operations. Second, we attempted to calculate the approximate costs of the headquarters positions and associated functions. Because the IGCA Inventory does not contain estimated costs for positions, we used the service databases’ pay grade and location information assigned to the military and civilian positions in an attempt to determine the estimated cost to DOD of filling headquarters positions. Specifically, we applied DOD’s military composite standard pay rates and civilian fringe benefits rates to the pay grades we had collected for each position identified in the service databases. 
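The cost approximation described above—applying composite pay and fringe-benefit rates to each position's pay grade and aggregating by headquarters function—can be sketched as follows. The positions, grades, and rate values are hypothetical stand-ins for the service manpower data and DOD's published military composite standard pay rates and civilian fringe-benefit rates.

```python
# Sketch of the cost approximation described above. Position records,
# pay grades, and rates are hypothetical stand-ins for the service
# manpower databases and DOD's published composite rates.

# Hypothetical headquarters positions drawn from a service manpower database.
positions = [
    {"function": "Personnel Management", "type": "military", "grade": "O-4"},
    {"function": "Personnel Management", "type": "civilian", "grade": "GS-12"},
    {"function": "Financial Management", "type": "military", "grade": "O-5"},
    {"function": "Financial Management", "type": "civilian", "grade": "GS-13"},
]

# Hypothetical annual rates by grade: military composite standard pay
# rates, and civilian base pay with an assumed fringe-benefit multiplier.
military_rates = {"O-4": 150_000, "O-5": 175_000}
civilian_base = {"GS-12": 90_000, "GS-13": 105_000}
CIVILIAN_FRINGE = 0.35  # assumed fringe-benefit rate, for illustration only

def position_cost(p):
    """Estimated annual cost to the department of filling one position."""
    if p["type"] == "military":
        return military_rates[p["grade"]]
    return civilian_base[p["grade"]] * (1 + CIVILIAN_FRINGE)

def cost_by_function(positions):
    """Aggregate estimated position costs by headquarters function."""
    totals = {}
    for p in positions:
        totals[p["function"]] = totals.get(p["function"], 0) + position_cost(p)
    return totals

for fn, total in sorted(cost_by_function(positions).items()):
    print(f"{fn}: ${total:,.0f}")
```

A calculation of this shape is only as reliable as its inputs, which is why the missing grade and location data described above prevented a department-wide estimate.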
We were unable to do a similar calculation for the Fourth Estate because Fourth Estate data on positions came from the IGCA Inventory, which does not contain the pay grade and location information needed for a cost calculation. We assessed our and DOD’s efforts to calculate headquarters-related costs against federal standards for internal control on having relevant, reliable, and timely information, and noted the importance of a key step that we have previously identified for conducting fragmentation, overlap, and duplication reviews. Specifically, one of the steps in conducting such a review is to identify the positive and negative effects of any fragmentation, overlap, and duplication by assessing program implementation, outcomes and impact, and cost-effectiveness. We assessed the reliability of the IGCA-related data sets by reviewing responses to data questionnaires sent to knowledgeable service and Fourth Estate officials, discussing the data with these officials, and conducting our own cross-checks of the data to assess their reasonableness. We found the data to be insufficient for identifying trends in the number and type of headquarters positions and for estimating costs associated with headquarters positions. However, we found the data to be sufficiently reliable for presenting 1 year’s worth of data for purposes of illustrating the types of analyses of department-wide headquarters functions that could be conducted if DOD improved the reliability of these data. Finally, we were unable to obtain data on contracted services personnel, either their positions or associated costs, because DOD does not identify contracted services personnel by the type of headquarters function they perform. We interviewed officials or, where appropriate, obtained documentation from the organizations listed below.
Office of the Secretary of Defense
  Office of the Deputy Chief Management Officer
  Office of Cost Assessment and Program Evaluation
  Office of the Under Secretary of Defense (Comptroller)
  Office of the Under Secretary of Defense for Personnel and Readiness
  Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics

Joint Staff
  J1, Manpower and Personnel Directorate

Department of the Army
  Office of the Assistant Secretary of the Army for Manpower and Reserve Affairs
  G1, Office of the Deputy Chief of Staff for Personnel
  G3/5/7, Operations and Plans

Department of the Navy
  Office of the Assistant Secretary of the Navy for Manpower and Reserve Affairs
  N1, Office of the Deputy Chief of Naval Operations, Manpower
  Deputy Commandant for Combat Development and Integration

Department of the Air Force
  A1, Office of the Deputy Chief of Staff for Personnel

U.S. Special Operations Command

Defense Agencies / DOD Field Activities
  Defense Acquisition University
  Defense Advanced Research Projects Agency
  Defense Contract Audit Agency
  Defense Contract Management Agency
  Defense Finance and Accounting Service
  Defense Human Resource Activity
  Defense Information Systems Agency
  Defense Legal Services Agency
  Defense POW/MIA Accounting Agency
  Defense Security Cooperation Agency
  Defense Technical Information Center
  Defense Technology Security Administration
  Defense Threat Reduction Agency
  Department of Defense Education Activity
  Department of Defense Inspector General
  Office of Economic Adjustment
  Pentagon Force Protection Agency
  Test Resource Management Center

We conducted this performance audit from January 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Department of Defense (DOD) officials identified the following efforts initiated between 2010 and 2014 to realize cost savings related to headquarters. In a May 2010 speech, the Secretary of Defense expressed concerns about the dramatic growth in DOD’s headquarters and support organizations that had occurred since 2001, including increases in spending, staff, and numbers of senior executives and the proliferation of management layers. The Secretary of Defense then directed that DOD undertake a department-wide initiative to assess how the department is staffed, organized, and operated, with the goal of reducing excess overhead costs and reinvesting these savings toward sustainment of DOD’s current force structure and modernizing its weapons portfolio. In March 2012, DOD identified additional efficiency initiatives, referred to as More Disciplined Use of Resources initiatives, in its fiscal year 2013 budget request. DOD identified additional More Disciplined Use of Resources initiatives for the fiscal year 2014 budget in April 2013. According to information accompanying its fiscal years 2013 and 2014 budget requests, DOD identified these initiatives by conducting a review of bureaucratic structures, business practices, modernization programs, civilian and military personnel levels, and associated overhead costs. In March 2013, the Secretary of Defense directed the completion of a Strategic Choices and Management Review to examine the potential effect of additional, anticipated budget reductions on the department and to develop options for performing DOD missions. According to the Secretary, a tenet of the review was the need to maximize savings from reducing DOD’s overhead, administrative costs, and other institutional expenses.
In July 2013, the Secretary of Defense set a target for reducing DOD components’ total management headquarters budgets by 20 percent for fiscal years 2014 through 2019, including costs for civilian personnel and contracted services, while striving for a goal of 20 percent reductions to authorized military and civilian personnel. This effort was designed to streamline DOD’s management of its headquarters through efficiencies and elimination of spending on lower-priority activities. In August 2013, the Secretary and Deputy Secretary of Defense directed an organizational review of the Office of the Secretary of Defense, consistent with the Strategic Choices and Management Review, that was intended to assess and recommend specific adjustments to OSD’s organizational structure. The review resulted in several organizational alignments, such as realigning another office to the Office of the Deputy Chief Management Officer structure, and contributed to the 20 percent headquarters reductions that were captured in DOD’s fiscal year 2015 budget request. This appendix provides our analysis showing the five headquarters functions, in each military service and Fourth Estate component, with the highest number of headquarters positions for fiscal year 2014. Based on our review of the data and discussions with service officials, fiscal year 2014 data are the most reliable data available during the period of our review. The military services are the Army, the Navy, the Marine Corps, and the Air Force. To help meet their respective missions, each military service has both operational and nonoperational headquarters organizations. See table 2 for the percentage of the military services’ headquarters positions by headquarters function for fiscal year 2014. The Fourth Estate is made up of the Department of Defense (DOD) organizations that are separate from the military services.
Our review focused on four organizational components that make up the Fourth Estate: (1) the Office of the Secretary of Defense; (2) the Joint Staff, including the North Atlantic Treaty Organization; (3) the combatant commands; and (4) defense agencies and DOD field activities. See table 3 for the percentage of Fourth Estate headquarters positions by headquarters function for fiscal year 2014. In addition to the contact named above, Margaret A. Best (Assistant Director), Tracy Barnes, Timothy Carr, Gabrielle A. Carrington, Cynthia Grant, Mae Jones, Bethann E. Ritter Snyder, Benjamin Sclafani, Michael Silver, Amie Lesser, and Melissa Wohlgemuth made key contributions to this report. | Facing budget pressures, DOD is seeking to reduce its headquarters activities by identifying streamlining opportunities. DOD has multiple layers of headquarters activities with complex, overlapping relationships, such as OSD, the Joint Staff, the military service secretariats and staffs, and defense agencies. Committee reports accompanying bills for the National Defense Authorization Act for Fiscal Year 2015 included provisions for GAO to identify DOD's headquarters reduction efforts to date and patterns in functional areas related to DOD's headquarters activities. This report (1) describes the status of DOD's initiatives since 2014 to improve the efficiency of headquarters organizations and identify related cost savings, and (2) assesses the extent to which DOD has reliable data to assess headquarters functions and their associated costs. GAO assessed DOD-wide headquarters-related efficiency efforts, and a DOD-wide data set that identifies positions with headquarters functions. 
Since 2014, and in part to respond to congressional direction, the Department of Defense (DOD) has undertaken initiatives intended to improve the efficiency of headquarters organizations and identify related cost savings, but it is unclear to what extent these initiatives will help the department achieve the potential savings it has identified. In a 2015 review of its six business processes, DOD identified $62 billion to $84 billion in potential cumulative savings opportunities for fiscal years 2016 through 2020. According to DOD officials, the department is currently pursuing four headquarters-related initiatives, but these were not completed, or results were not available, in time for GAO to assess their effect. The table below provides a description of these initiatives. (Table not reproduced; source: GAO analysis of DOD information, GAO-16-286.) DOD has taken steps to improve some available data on headquarters organizations, but does not have reliable data for assessing headquarters functions and associated costs. Consistent with a GAO recommendation, DOD has established a framework for major DOD headquarters activities, is working to identify which organizations or portions of organizations meet a new definition of major DOD headquarters activities, and plans to update a key database to improve visibility of headquarters resources. However, the one department-wide data set that identifies military and civilian positions by specific DOD headquarters functions contains unreliable data because DOD has not aligned these data with its revised headquarters definition. Further, DOD does not have plans to collect information on costs associated with functions within headquarters organizations. This may hinder DOD's ability to conduct an in-depth review for purposes of consolidating and streamlining headquarters functions. 
Without alignment of headquarters function data with the revised headquarters definition and collection of reliable information on costs associated with headquarters functions, DOD may be unable to accurately assess specific functional areas or identify potential streamlining and cost savings opportunities. To further DOD's efforts to identify headquarters-related efficiency opportunities, GAO recommends that DOD align its data on positions that have headquarters-related DOD function codes with the revised definition of major DOD headquarters activities and collect information on costs associated with functions within headquarters organizations. DOD concurred with the recommendations. |
Managed Accounts in Other Workplace Defined Contribution Plans and Individual Retirement Accounts (IRAs) As managed accounts have gained popularity in 401(k) plans, there are indications that they may also be gaining popularity in government and non-profit workplace retirement savings plans, commonly referred to as 457 or 403(b) plans. Many of the providers we spoke to that offer managed accounts to 401(k) plans also offer services to other plans like these. In addition, some providers are starting to offer managed accounts in IRAs, and in particular rollover IRAs—when participants separate from their employer they may decide to roll their funds into an IRA. One of these providers noted that it is easier to engage participants who use managed accounts through products such as IRAs, and there is more flexibility with investment options, even though the provider’s marketing costs may be higher. Under Title I of the Employee Retirement Income Security Act of 1974 (ERISA), as amended, employers are permitted to sponsor defined contribution plans in which an employee’s retirement savings are based on contributions and the performance of the investments in individual accounts. Typically, 401(k) plans—the predominant type of defined contribution plan in the United States—allow employees who participate in the plan to specify the size of their contributions and direct their assets to one or more investments among the options offered within the plan. Investment options generally include mutual funds, stable value funds, company stock, and money market funds. To help participants make optimal investment choices, an increasing number of plans are offering professionally managed allocations—including managed accounts—in their 401(k) plan lineups. 
Managed accounts are investment services under which providers make investment decisions for specific participants to allocate their retirement savings among a mix of assets they have determined to be appropriate for the participant based on their personal information. As shown in figure 1, managed accounts were first offered to 401(k) plans around 1990, but most providers did not start offering them until after 2000. Managed accounts differ from other professionally managed allocations, such as target date funds and balanced funds, in several key ways. Target date funds (also known as life cycle funds) are products that determine an asset allocation that would be appropriate for a participant of a certain age or retirement date and adjust that allocation so it becomes more conservative as the fund approaches its intended target date. Target date funds do not place participants into an asset allocation; instead, participants generally self-select into a target date fund they feel is appropriate for them based on the fund’s predetermined glide path that governs asset allocation. Balanced funds are products that generally invest in a fixed mix of assets (e.g., 60 percent equity and 40 percent fixed income assets). While target date funds manage the fund to reach a target date, managed accounts may consider other, more personalized factors such as a participant’s stated risk tolerance, even though they are not required to do so. As shown in figure 2, managed accounts may offer higher levels of personalization than other types of professionally managed allocations. Managed accounts are generally considered to be an investment service—not one of the plan’s investment options—while target date funds are considered to be investment options. In the latter, participants can invest all or a portion of their 401(k) plan contributions in a target date fund, but generally cannot directly invest in a managed account. 
Instead, the role of the participant is to enroll in the managed account service, or be defaulted into it, generally relinquishing their ability to make investment decisions unless they disenroll from, or opt out of, the managed account. As shown in figure 3, managed account providers decide how to invest contributions, generally among the investment options available in a 401(k) plan, and then manage these investments over time to help participants reach their retirement savings goals. By comparison, participants not enrolled in a managed account have to make their own decisions about how to invest their 401(k) plan contributions. DOL’s Employee Benefits Security Administration (EBSA) is the primary agency through which Title I of ERISA is enforced to protect private pension plan participants and beneficiaries from the misuse or theft of their pension assets. To carry out its responsibilities, EBSA issues regulations and guidance; investigates plan sponsors, fiduciaries, and service providers; seeks appropriate remedies to correct violations of the law; and pursues litigation when it deems necessary. As part of its mission, DOL is also responsible for assisting and educating plan sponsors to help ensure the retirement security of workers and their families. In 2007, DOL designated certain managed accounts as one type of investment that may be eligible as a qualified default investment alternative (QDIA) into which 401(k) plan fiduciaries may default participants who do not provide investment directions with respect to their plan contributions. 
DOL designated three categories of investments that may be eligible as QDIAs if all requirements of the QDIA regulation have been satisfied—these categories generally include: (1) an investment product or model portfolio that is designed to become more conservative as the participant’s age increases (e.g., a target date or lifecycle fund); (2) an investment product or model portfolio that is designed with a mix of equity and fixed income exposures appropriate for the participants of the plan as a whole (e.g., a balanced fund); and (3) an investment management service that uses investment alternatives available in the plan and is designed to become more conservative as the participant’s age increases (e.g., a managed account). DOL regulations indicate that plan fiduciaries who comply with the QDIA regulation will not be liable for any loss to participants that occurs as a result of the investment of their assets in a QDIA, including investments made through managed account arrangements that satisfy the conditions of the QDIA regulation. However, plan fiduciaries remain responsible for the prudent selection and monitoring of any QDIA offered by the plan. To obtain relief, plan fiduciaries must provide participants with advance notice of the circumstances under which plan contributions or other assets will be invested on their behalf in a QDIA; a description of the QDIA’s investment objectives, risk and return characteristics, and fees and expenses; and the right of participants to opt out of the QDIA, among other things. A 2012 survey of defined contribution plan sponsors by PLANSPONSOR indicated that managed accounts were used as a QDIA less than 5 percent of the time. Managed accounts are also offered as opt-in services by over 30 percent of defined contribution plan sponsors. Managed accounts can be offered as both QDIA and opt-in services, allowing the plan sponsor to choose which services to offer their participants. 
Plan fiduciaries who offer managed account services only to participants who affirmatively elect to use the service (i.e., on an opt-in basis), rather than by default, are not required to comply with the QDIA regulation, although such fiduciaries still are subject to the general fiduciary obligations under ERISA with respect to the selection and monitoring of a managed account service for their plan. Plan sponsors, including those who offer managed account services in their 401(k) plans, are required to issue a variety of informational disclosures and notices to plan participants and beneficiaries at enrollment, on a quarterly and annual basis, and when certain triggering events occur. These disclosures—often referred to as participant-level disclosures—when made in accordance with regulatory requirements, help ensure that plan participants have access to the information they need to make informed decisions about their retirement investments. In addition, when a plan sponsor chooses to default participants into managed accounts as a QDIA, the sponsor must inform participants of this decision annually through a number of specific disclosures, based on the plan’s design. The QDIA disclosures, when made in accordance with regulatory requirements, provide relief from certain fiduciary responsibilities for sponsors of 401(k) plans. Service providers that provide managed account services to a plan may be required to provide certain disclosures about the compensation they will receive to plan sponsors offering a managed account service under different DOL disclosure requirements. These disclosures—often referred to as service provider disclosures—are intended to provide information sufficient for sponsors to make informed decisions when selecting and monitoring service providers for their plans. 
DOL’s final rule on these disclosures requires service providers to furnish sponsors with information to help them assess the reasonableness of total compensation paid to providers, to identify potential conflicts of interest, and to satisfy other reporting and disclosure requirements under Title I of ERISA, including the regulation governing sponsor’s disclosure to participants. Managed account provider roles may differ from those of other plan service providers. As shown in figure 4, when a plan sponsor decides to offer participants a managed account service, other entities may contribute to its implementation and operation. Some record keepers and intermediary service providers refer to themselves as “managed account providers” because they make this service available to participants, but they do not ultimately decide how to invest participant contributions. Similarly, even though target date fund managers or collective investment trust managers may select an overall asset allocation strategy and investments to fit that strategy for the funds they offer to 401(k) plan participants, these managers also do not ultimately decide how to invest participant accounts. Plan sponsors are typically the named fiduciaries of the plan. Managed account providers and record keepers may also be fiduciaries, depending on their roles and the services they provide. Fiduciaries are required to carry out their responsibilities prudently and solely in the interest of the plan’s participants and beneficiaries. Plan service providers that have investment discretion or provide investment advice about how to invest participant accounts generally may be “3(38) Investment Manager” fiduciaries or “3(21) Investment Adviser” fiduciaries. A 3(38) Investment Manager fiduciary can only be a bank, an insurance company, or a Registered Investment Adviser (RIA). 
Under ERISA, 3(38) Investment Manager fiduciaries have the power to manage, acquire, or dispose of plan assets, and they acknowledge, in writing, that they are a fiduciary with respect to the plan. In contrast, a 3(21) Investment Adviser fiduciary usually does not have authority to manage, acquire, or dispose of plan assets, but is still a fiduciary because its investment recommendations may exercise some level of influence and control over the investments made by the plan. When managed account services are offered as QDIAs, the managed account provider is generally required to be a 3(38) Investment Manager fiduciary. There is no similar explicit requirement for managed account providers whose services are offered within a plan on an opt-in basis. Managed account providers vary in how they provide services, even though they generally offer the same basic service—initial and ongoing investment management of a 401(k) plan participant’s account based on generally accepted industry methods. The eight providers in our case studies use different investment options, employ varying strategies to develop and adjust asset allocations for participants, incorporate varying types and amounts of participant information, and rebalance participant accounts at different intervals. As a result, participants with similar characteristics in different plans may have differing experiences. To develop participant asset allocations, most of the eight providers in our case studies use the investment options chosen by the plan sponsor. By contrast, other providers require plan sponsors that want to offer their managed account to accept a preselected list of investment options from which the provider will determine participant asset allocations, including exchange traded funds or asset classes not typically found in 401(k) plan lineups, such as commodities. Because these investment options are atypical, participants who do not sign up for managed accounts may not be able to access them. 
Compared to typical 401(k) plan investment options, these atypical investment options may provide broader exposure to certain markets and opportunities to diversify participant retirement assets. The eight managed account providers in our case studies generally reported making asset allocation decisions based on modern portfolio theory, which sets a goal of taking enough risk so that participants’ 401(k) account balances may earn large enough returns over time to meet their retirement savings goals, but not so much that their balances could earn lower or even negative returns. Managed account providers generally help participants by constructing portfolios that attempt to provide maximum expected returns with a given level of risk, but their strategies can range from formal to informal. The formal way of determining this type of portfolio is called “mean-variance optimization” (MVO), under which providers plot risk and return characteristics of all combinations of investment options in the plan and choose the portfolio that maximizes expected return for a given level of risk. There are a number of specific techniques that managed account providers can apply to improve the quality and sophistication of asset allocations, including Monte Carlo simulation. However, some providers incorporated less formal ways of achieving a diversified portfolio, such as active management and experience-based methods. The eight providers in our case studies use varying strategies and participant goals to develop and adjust asset allocations for participants, as shown in table 1. As a result, participants with similar characteristics may end up with different asset allocations. Providers’ use of different asset allocation strategies leads to variation in the asset allocations participants actually experience. As shown in figure 5, four of the eight providers in our case studies vary in their recommendations of specific investment options for a 30-year-old participant. 
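The mean-variance idea described above can be sketched in a few lines. Everything here is illustrative: the three asset classes, their expected returns, the covariance matrix, and the risk cap are invented numbers, not any provider's actual inputs, and a production MVO engine would solve the optimization analytically rather than scanning a coarse weight grid.

```python
from itertools import product

# Hypothetical inputs for three broad asset classes (not provider data).
assets = ["equity", "fixed_income", "cash"]
exp_ret = [0.07, 0.03, 0.01]          # expected annual returns
cov = [
    [0.0400, 0.0020, 0.0000],          # equity variance ~ (20% vol)^2
    [0.0020, 0.0025, 0.0000],          # bond variance ~ (5% vol)^2
    [0.0000, 0.0000, 0.0001],          # cash variance ~ (1% vol)^2
]

def portfolio_stats(w):
    """Return (expected return, variance) for weight vector w."""
    r = sum(wi * ri for wi, ri in zip(w, exp_ret))
    v = sum(w[i] * cov[i][j] * w[j] for i in range(3) for j in range(3))
    return r, v

def best_portfolio(max_variance, steps=20):
    """Scan a weight grid; keep the highest-return mix within the risk cap."""
    best = None
    for a, b in product(range(steps + 1), repeat=2):
        if a + b > steps:
            continue
        w = (a / steps, b / steps, (steps - a - b) / steps)
        r, v = portfolio_stats(w)
        if v <= max_variance and (best is None or r > best[0]):
            best = (r, v, w)
    return best

r, v, w = best_portfolio(max_variance=0.02)   # cap variance at ~14% volatility
print(dict(zip(assets, (round(x, 2) for x in w))), round(r, 4))
```

With these made-up inputs the risk cap binds, so the screen trades a little equity for bonds instead of simply maxing out the riskiest asset, which is the essential behavior MVO is meant to capture.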
The type and amount of information providers use can also affect the way participant account balances are allocated. For example, two of the eight providers in our case studies only offer a customized service—allocating a participant’s account based solely on age or other factors that can be easily obtained from the plan’s record keeper, such as gender, income, current account balance, and current savings rate. The other six providers also offer a personalized service that takes into account additional personal information to inform asset allocations, such as risk tolerance or spousal assets. Providers that offer a personalized service reported that personalization could lead to better asset allocation for participants, but they also reported that generally fewer than one-third, and sometimes fewer than 15 percent, of participants furnish this personalized information. As a result, some industry representatives felt that participants may not be getting the full value of the service for which they are paying. For example, participants who are defaulted into managed accounts that offer a highly personalized service run the risk of paying for services they are not using if they are disengaged from their retirement investments. As shown in table 2, we found that among five of the seven providers that furnished asset allocations for our hypothetical scenarios, there was little relationship between the level of personalization and the fee they charged to participants for the managed account service. Some managed account providers’ services may become more beneficial as participants age or as their situations become more complex because personalization seeks to create a tailored asset allocation for each participant. Such an individualized approach could even mean that older participants who are close to retirement and very young participants just starting their careers could be placed in equally risky allocations based on their personalized circumstances. 
However, industry representatives told us that participants who never supply additional, personalized information to managed account providers may be allocated similarly over time to those participants in target date funds. Providers differ in their approaches and time frames for rebalancing participant managed accounts—adjusting participant accounts to reflect any changes to their asset allocation strategies based on changing market conditions and participant information. Seven of the eight providers in our case studies use a “glide path” approach to systematically reduce participant risk over time but one does not set predetermined glide paths for participants. Similarly, two of the eight providers in our case studies rebalance participant accounts annually, while the other providers generally review and rebalance participant accounts at least quarterly. Despite these differences in approaches and timeframes, our analysis of provider hypothetical asset allocations indicated that providers generally allocated less to equity assets and more to fixed income or cash-like assets for the older hypothetical participants than for the younger hypothetical participant. Some managed account providers in our case studies offer their services under “direct” arrangements in which the plan sponsor directly contracts with a provider to offer these services, as shown in figure 6. According to the providers we spoke with, managed account providers in this type of arrangement are generally fiduciaries, but record keepers may not be fiduciaries with respect to the managed account service, as their role consists primarily of providing information to the managed account provider and implementing asset allocation changes to participant accounts. By contrast, some managed account providers use “subadvised” arrangements to offer their services. 
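The glide-path and rebalancing behavior described above can be sketched as follows. The "110 minus age" equity rule, the 80/20 bond-cash split of the remainder, and the bounds are common rules of thumb used purely for illustration; the report does not publish any case-study provider's actual glide path.

```python
# Illustrative glide path plus periodic rebalancing (rules of thumb only).

def target_allocation(age: int) -> dict:
    """Equity share tapers with age; remainder split between bonds and cash."""
    equity = max(0.20, min(0.90, (110 - age) / 100))
    return {
        "equity": equity,
        "fixed_income": (1 - equity) * 0.8,
        "cash": (1 - equity) * 0.2,
    }

def rebalance(holdings: dict, age: int) -> dict:
    """Restore a drifted account to the glide-path target at market value."""
    total = sum(holdings.values())
    return {k: round(total * share, 2)
            for k, share in target_allocation(age).items()}

# A 30-year-old's target is 80% equity; a drifted account is pulled back
# to target at the next scheduled (e.g., quarterly or annual) rebalance.
drifted = {"equity": 70_000, "fixed_income": 20_000, "cash": 10_000}
print(target_allocation(30))
print(rebalance(drifted, 30))
```

The same two functions also show why a participant who never supplies personalized information ends up on an age-only path much like a target date fund: age is the only input.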
According to the providers we spoke to, in these arrangements, the plan sponsor does not directly contract with the managed account provider, and the plan’s record keeper, or an affiliate, may take on some fiduciary responsibility with respect to the managed account, as shown in figure 7. The record keeper may fulfill some of the responsibilities the managed account provider would have in a direct arrangement. These responsibilities may include providing periodic rebalancing based on the provider’s strategy, marketing managed account services, or offering other ongoing support for participants. All of the eight managed account providers in our case studies told us that they take on some level of fiduciary responsibility—regardless of whether their services are offered as QDIAs or on an opt-in basis—so they each offer some protections to sponsors and participants in managed accounts. Seven of the providers in our case studies told us that they willingly accept 3(38) Investment Manager fiduciary status for discretionary management over participant accounts, but one of the eight providers in our case studies noted that it never accepts 3(38) Investment Manager fiduciary status because it only has discretion over participants’ accounts once a year. This provider told us that it is only a 3(21) Investment Adviser fiduciary even though its managed account service is similar to that of the other providers in our case studies. Under ERISA, 3(21) Investment Adviser fiduciaries usually do not have authority over plan assets, but they may influence the operation of the plan by providing advice to sponsors and participants for a fee. As such, they are generally liable for the consequences when their advice is imprudent or disloyal. In contrast, a 3(38) Investment Manager fiduciary has authority to manage plan assets at their discretion and with prudent judgment, and is also liable for the consequences of imprudent or disloyal decisions. 
Because 3(38) Investment Manager fiduciaries have explicit discretionary authority and must have the qualifications of a bank, insurance company, or RIA, sponsors who use 3(38) Investment Manager fiduciaries may receive a broader level of liability protection from those providers as opposed to providers who offer managed account services as 3(21) Investment Adviser fiduciaries. In addition, when a 3(38) Investment Manager fiduciary is used, participants may have a broader level of assurance that they are receiving services from a qualified manager in light of the requirements related to qualifications of such fiduciaries. As noted previously, when managed account services are offered as QDIAs, DOL requires the managed account provider to generally be a 3(38) Investment Manager fiduciary, but DOL has no similar explicit requirement for managed account providers whose services are offered on an opt-in basis. Absent explicit requirements or additional guidance from DOL, some managed account providers may choose to structure the services they provide to limit their fiduciary liability, which could ultimately provide less liability protection for sponsors for the consequences of provider investment management choices. Given the current lack of direction or guidance about appropriate fiduciary roles for providers that offer managed accounts on an opt-in basis, sponsors may not be aware of this potential concern. Industry representatives we spoke with expressed concern about managed account providers who do not accept full responsibility with respect to managed account services by acknowledging their role as a 3(38) Investment Manager fiduciary. 
Other representatives also noted that it was important for sponsors to understand providers’ fiduciary responsibilities given the important differences between 3(21) Investment Adviser and 3(38) Investment Manager fiduciaries with respect to the nature of liability protection they may provide for sponsors and the services they may provide for both sponsors and participants. Managed account providers may offer potentially valuable additional services to participants in or near retirement regarding how to spend down their accumulated retirement savings, but these services could lead to potential conflicts of interest. Most of the providers in our case studies allow participants to continue receiving account management services when they retire as long as they leave all or a portion of their retirement savings in the 401(k) plan. Some of those providers also provide potentially useful additional services to participants in or near retirement and do not typically charge additional fees for doing so. These services may include helping participants review the tax consequences of withdrawals from their 401(k) account and advising them about when and how to claim Social Security retirement benefits. However, these providers may have a financial disincentive to recommend an out-of-plan option, such as an annuity or rollover to other plans or IRAs, because it is advantageous for them to have participants’ continued enrollment in their managed account service offered through a 401(k) plan. Providers have developed ways to mitigate some of this potential conflict of interest by, for example, offering advice on alternate sources of income in retirement such as TIPS. Regardless, representatives from a participant advocacy group noted that managed account providers should have little involvement in a participant’s decision about whether to stay in the managed account. 
As part of its responsibilities to protect plan participants under ERISA, DOL has not specifically addressed whether conflicts of interest may exist with respect to managed accounts offering additional services to participants in or near retirement. As a result, participants can be easily persuaded to stay in the managed account given the additional services offered to them by managed account providers. Additionally, the ease that these services offer could discourage managed account participants from fully considering other options, which can ultimately put them at risk of making suboptimal spend-down decisions. Some managed account providers and plan sponsors have said that increased diversification of retirement portfolios is the main advantage of the managed account service for 401(k) plan participants. Increased diversification for participants enrolled in a managed account can result in better risk management and increased retirement income compared to those who self-direct their 401(k) investments. For example, one provider’s study of managed account performance found that the portfolios of all managed account participants were believed to have been appropriately allocated, but that 43 percent of those who self-directed their 401(k) investments had equity allocations that were believed to be inappropriate for their age, and nearly half of these participants’ portfolios were improperly diversified. The advantages of a diversified portfolio include reducing a participant’s risk of loss, reducing volatility within the participant’s account, and generating long-term positive retirement outcomes. Another reported advantage of managed accounts is that they help to moderate volatility in 401(k) account performance, compared to accounts of those who self-direct their 401(k) investments. 
For example, in two recent reports on managed account performance, one record keeper concluded that the expanded use of professionally managed allocations, including managed accounts, is contributing to a reduction in extreme risk and return outcomes for participants, and is also gradually mitigating concerns about the quality of portfolio decision-making within defined contribution plans. Managed account providers in our eight case studies also claim that the increased personalization and more frequent rebalancing of managed accounts create an appropriately diversified portfolio that better meets a participant’s retirement goals than target date funds or balanced funds. According to these providers, periodic rebalancing combats participant inertia, one of the main problems with a self-directed 401(k) account, and the failure to update investment strategies when financial circumstances change over time. Several managed account providers told us that another advantage of managed accounts is the tendency for participants to save more for retirement compared to those who are not enrolled in the service. For example, in a study of managed accounts, a provider reported that participants in plans for which this provider offers the service contributed $2,070 more on average in 2012 than participants who self-directed investments in their 401(k) accounts (1.9 percent of salary more in contributions on average than participants who self-direct 401(k) investments). This provider noted that managed account participants are better at taking advantage of their plan’s matching contribution than participants who self-direct their 401(k) investments. For example, they found that 69 percent of managed account participants contributed at least to the level of the maximum employer matching contribution, while only 62 percent of participants who self-directed investments contributed to this level. 
This provider said that communicating with managed account participants can increase savings rates, for example by encouraging them to raise their savings rates by at least 2 percentage points and to save at least to the point where they receive the full employer match, if such a match exists. Another service provider told us that it offers an online calculator that managed account participants can use to understand their retirement readiness. The provider also said that participants who use the calculator can see how increased savings can lead to improved retirement outcomes and will often increase their savings rate into their managed account. Retirement readiness statements received by participants who are enrolled in a managed account are another reported advantage of the service. Participants generally receive retirement readiness statements that can help them assess whether they are on track to reach their retirement goals, and the statements generally contain information about their retirement investments, savings rate, asset allocations, and projected retirement income. These statements help participants understand the likelihood of reaching their retirement goals given their current investment strategy and whether they should consider increasing their savings rates or changing risk tolerances for their investments. In some cases, these statements may provide participants with their first look in one document at the overall progress they are making toward their retirement goals. As shown in table 3, our review of three providers’ statements shows that they use different metrics on participant readiness statements to evaluate participants’ retirement prospects. For example, each statement provided participants with information on their retirement goals and risk tolerance, and a projection of their future retirement income to demonstrate the value of the service. 
Similar advantages, however, can be achieved through other retirement investment vehicles outside of a managed account and without paying the additional managed account fee. For example, in one recent study, a record keeper that offers managed accounts through its platform showed that there are other ways to diversify using professionally managed allocations, such as target date funds, which can be less costly. Although managed account providers may encourage participants to save more and review their progress towards achieving a secure retirement, participants still have to pay attention to these features of the managed account for it to provide value. Even if 401(k) plan participants are not in managed accounts, we found that in some instances they can still receive advice and education from a provider in the form of retirement readiness statements. The additional fee a participant generally pays for a managed account was the primary disadvantage mentioned by many industry representatives, plan sponsors, and participant advocates. Because of these additional fees, 401(k) plan participants who do not receive higher investment returns from the managed account services risk losing money over time. Some managed account providers and record keepers have reported that managed account participants earn higher returns than participants who self-direct their 401(k) plan investments, which may help participants offset the additional fee charged. For example, one provider told us that participants enrolled in managed accounts saw about 1.82 percentage points better performance per year, net of fees, compared to participants without managed accounts. Given these higher returns, this provider projects that a 25-year-old enrolled in its managed account could potentially see up to 35 percent more income at retirement than a participant not enrolled in the service, according to this provider’s calculations. 
Another provider reported that the portfolios of participants who were defaulted into managed accounts were projected to receive returns of nearly 1 percentage point more annually, net of fees, after the provider made allocation changes to the participants’ portfolios. However, the higher rates of return projected by managed account providers may not always be achievable. For instance, we found limited data from one record keeper that published returns for managed account participants that were generally less than or equal to the returns of other professionally managed allocations (a single target date fund or balanced fund) as shown in figure 8. We used these and other returns data published by this record keeper to illustrate the potential effect over 20 years of different rates of return on participant account balances. On the lower end, this record keeper reported that, over a recent 5-year period, 25 percent of its participants earned annualized returns of -0.1 percent or less, not even making up the cost of the additional fee for the service. On the higher end, the record keeper reported that, over a slightly different 5-year period, 25 percent of its participants earned annualized returns of 2.4 percent or higher for the service. These actual returns illustrate the substantial degree to which returns can vary. If such a 2.5 percentage point difference (between these higher and lower reported managed account rates of return of 2.4 percent and -0.1 percent, respectively) were to persist over 20 years, a participant earning the higher managed account rate of return could have nearly 26 percent more in their ending account balance at the end of 20 years than a participant earning the lower rate of return in their managed account. As shown in Figure 9, using these actual rates of return experienced by participants in managed accounts, such a variation in rates of return can substantially affect participant account balances over 20 years. 
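The effect of a persistent gap in rates of return can be sketched with a simple projection. The starting balance and annual contribution below are illustrative assumptions for the sketch, not the inputs used in the report's own analysis; only the 2.4 percent and -0.1 percent annualized returns come from the record keeper's published data:

```python
def ending_balance(annual_return, years=20, start=25_000, contribution=5_000):
    """Project an account balance assuming a constant annual net return,
    with a fixed contribution added at the end of each year."""
    balance = start
    for _ in range(years):
        balance = balance * (1 + annual_return) + contribution
    return balance

# Higher and lower managed account returns reported by the record keeper
higher = ending_balance(0.024)    # 2.4 percent annualized
lower = ending_balance(-0.001)    # -0.1 percent annualized
print(f"Ending balances: ${higher:,.0f} vs. ${lower:,.0f} "
      f"({higher / lower - 1:.0%} more at the higher return)")
```

Because ongoing contributions dilute the compounding gap, the percentage difference in ending balances is smaller than compounding the 2.5 percentage point spread on a static balance would suggest.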
Further, this record keeper’s published data on managed account rates of return were net of fees—rates of return would be higher if participants did not pay the additional fee for the service. For example, using this record keeper’s average fee rate in our analysis, we estimate that a hypothetical managed account participant who earned a higher rate of return of 2.4 percent will pay $8,400 more in additional fees over 20 years than a participant who self-directs investments in their 401(k) account and does not pay the additional fee. To illustrate the potential effect that fees could have on a hypothetical participant’s account balance over 20 years, we used a higher fee of 1 percent reported to us by one provider to estimate that a participant would pay $14,000 in additional fees compared to a participant who self-directs investments in their 401(k) account over the same period. However, based on the reported performance data we found, there is no guarantee that participants will earn a higher rate of return with a managed account compared to the returns for other professionally managed allocations or self-directed 401(k) accounts. The limited performance data we reviewed show that in most cases, managed accounts underperformed these other professionally managed allocations and self-directed 401(k) accounts over a 5-year period. However, managed account participants with lower rates of return still pay substantial additional fees for the service. To further illustrate the effect of fees on account balances, a hypothetical participant who earns a lower managed account rate of return of -0.1 percent would pay $6,900 in additional fees using this record keeper’s average fee over 20 years compared to a participant who self-directed investments in their 401(k) account, and the additional fees would increase to $11,500 at the 1 percent fee level using the lower rate of return. 
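The cumulative fee drag described above can be approximated in the same spirit: the additional fee is charged on the balance each year, and the remainder compounds. The balance, contribution, and gross return values here are hypothetical placeholders, not the record keeper's actual figures; only the fee rates (for example, 1 percent) correspond to rates cited in the text:

```python
def cumulative_fees(fee_rate, gross_return=0.05, years=20,
                    start=25_000, contribution=5_000):
    """Total additional managed account fees paid over time, charging the
    annual fee on the current balance and compounding what remains."""
    balance, fees_paid = start, 0.0
    for _ in range(years):
        fee = balance * fee_rate
        fees_paid += fee
        balance = balance * (1 + gross_return) - fee + contribution
    return fees_paid

# Extra cost of a 1 percent managed account fee vs. paying no additional fee
print(f"Extra fees over 20 years at 1%: ${cumulative_fees(0.01):,.0f}")
```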
The additional managed account fees, which are charged to participants over and above investment management and administrative fees, can vary substantially, and as a result, some participants pay no fees, others pay a flat fee each year, and still others pay a comparatively large percentage of their account balance for generally similar services from managed account providers. In our case studies, we reviewed the additional fees charged to participants for the service. One managed account provider charges a flat rate, and fees for the other seven providers ranged from 0.08 to 1 percent of the participant’s account balance annually, or $8 to $100 on every $10,000 in a participant’s account. Therefore, participants with similar balances but different providers can pay different fees. As shown in table 4, participants with an account balance of $10,000 whose provider charges the highest fee may pay 12.5 times as much as participants whose provider charges the lowest fee ($100 and $8, respectively). However, among participants with an account balance of $500,000, one whose provider charges the highest fee may pay up to 250 times as much as one whose provider charges the lowest fee ($5,000 and $20, respectively). Participants with large account balances whose managed account provider caps fees at a certain level benefit more than similar participants whose fees are not capped. Of the providers we reviewed who charge variable fees, one provider caps the fee at a certain amount per year. For example, this provider charges 0.25 percent or $25 for every $10,000 in a participant’s account, with a maximum of $250 per year, so participants who use this provider only pay fees on the first $100,000 in their accounts. As a result, the difference in fees paid by participants using this provider, or providers who charge flat rates, widens as participant account balances increase. 
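The fee structures described above (flat, variable, and capped) can be expressed as a single schedule. This is only a sketch: the rates and the $250 cap mirror the examples in the text, but the function itself is an illustration, not any provider's actual fee formula:

```python
def annual_fee(balance, rate=None, cap=None, flat=None):
    """Annual managed account fee under a flat, variable, or capped schedule."""
    if flat is not None:
        return flat          # flat dollar amount regardless of balance
    fee = balance * rate     # variable fee as a share of the balance
    return min(fee, cap) if cap is not None else fee

print(annual_fee(10_000, rate=0.0008))            # lowest variable fee: $8
print(annual_fee(10_000, rate=0.01))              # highest variable fee: $100
print(annual_fee(500_000, rate=0.0025, cap=250))  # capped at $250 per year
```

The cap means the variable fee stops growing once the balance passes $100,000, which is why the gap between capped (or flat) providers and uncapped providers widens as balances increase.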
Plan characteristics can affect fees participants pay to managed account providers. For example, at one managed account provider included in our review, a participant in a small plan may pay more for a managed account than a similar participant in a large plan. Similarly, a participant in a plan with high enrollment or that uses managed accounts as the default may pay less for a managed account than a participant with the same balance in a plan with low enrollment or that offers managed accounts as an opt-in service. We also found through our case studies that fees can vary based on factors beyond the plan’s characteristics, such as the types of providers involved in offering the managed account, the size of participant account balances, and the amount of revenue sharing received by the managed account provider. Fees calculated through revenue sharing can vary in accordance with the investment options the plan sponsor chooses to include in the plan and the amount of revenue the provider actually receives from these options. In these cases, the initial fee estimates for the managed account may differ from the actual fees participants pay. In addition, some plan sponsors also pay fees to offer managed account services, but since these fees may be paid out of plan assets, participants in these plans may pay more than participants in plans that do not pay fees. As shown above, paying higher additional fees to a provider for a managed account service offers no guarantee of higher rates of return compared to other providers or compared to the reported rates of return earned by participants who invest in other professionally managed allocations or who self-direct investments in their 401(k) accounts. Because the additional fee is charged to participants on a recurring basis, such as every quarter or year, the costs incurred over time by participants who use managed accounts can accumulate. 
We used fee data reported by managed account providers to illustrate the effect that different fees could have on a participant’s managed account balance over time. As shown in figure 10, a hypothetical participant in our illustration who is charged an additional annual fee of 1 percent of their account balance for their managed account may pay nearly $13,000 more over 20 years than they would have paid in any other investment without the managed account fee. This compares to about $1,100 in additional fees paid over 20 years by a participant who is charged an annual fee of 0.08 percent for a managed account, the lowest variable non-capped fee we found. The limited availability of returns-based performance data and lack of standard metrics can also offset the reported advantages of managed accounts. In its final rule on participant-level disclosures, DOL requires that sponsors disclose performance data to help participants make informed decisions about the management of their individual accounts and the investment of their retirement savings, and that sponsors provide appropriate benchmarks to help participants assess the various investment options available under their plan. By requiring sponsors to provide participants with performance data and benchmarking information for 401(k) investments, DOL intends to reduce the time required for participants to collect and organize fee and performance information and increase participants’ efficiency in choosing investment options that will provide the highest value. Since the applicability date of the participant-level disclosure regulation, for most plans in 2012, DOL has required plan sponsors to provide participants who invest in a “designated investment alternative” in their 401(k) account with an annual disclosure describing the fees, expenses, and performance of each of the investment funds available to them in the plan. 
DOL defines a designated investment alternative as “any investment alternative designated by the plan into which participants and beneficiaries may direct the investment of assets held in, or contributed to, their individual accounts.” For designated investment alternatives, plan sponsors are required to disclose to participants specific information identifying the funds available to them in the plan, returns-based performance information over varying time periods, and performance benchmarks in a way that invites comparison with established benchmarks and market indexes, as shown in table 5. Despite DOL’s requirements for designated investment alternatives, with respect to managed accounts offered either as an opt-in or default service, plan sponsors are generally only required to disclose to 401(k) participants the identity of the managed account provider or investment manager and any fees and expenses associated with its management. Neither plan sponsors nor managed account providers are required to isolate within the participant-level disclosure investment-related information on the individual funds that comprise the participant’s managed account or present aggregate performance of the account for a given period. DOL generally does not consider most managed accounts to be “designated investment alternatives.” Instead, according to DOL, managed account providers are generally considered to be “designated investment managers” as they provide a service to participants rather than an investment option, such as a mutual fund. As a result, the investment-related information required in DOL’s participant-level disclosure regulation does not apply to investment services, such as many managed accounts. 
Because DOL does not require plan sponsors to provide participants information on the performance of their managed accounts or to compare performance against a set of standard benchmarks, it is potentially difficult for participants to evaluate whether the additional fees for managed accounts are worth paying, considering the effect of fees on returns and retirement account balances. As a result, participants may be unable to effectively assess the overall value of the service. Not all of the retirement readiness statements we reviewed included returns-based performance data or information on the amount of additional fees the participant had paid for the service. Some managed account providers did include projections of a participant’s future retirement income on these statements. Even though the projections may be based on sound methodologies, if standard returns-based performance data are absent from these statements, participants will have to rely primarily on these projections to gauge the overall value of the service. Without performance and benchmarking information presented in a format designed to help participants compare and evaluate their managed account, participants cannot make informed decisions about the managed account service. Likewise, with respect to QDIAs, DOL only requires plan sponsors to disclose to participants a description of each investment’s objectives, risk and return characteristics (if applicable), fees and expenses paid to providers, and the right of the participant to elect not to have such contributions made on their behalf, among other things. In 2010, DOL proposed amendments to its QDIA disclosure requirements that would, with respect to target date funds or similar investments, require sponsors to provide participants historical returns-based performance data (e.g., 1-, 5-, and 10-year returns). 
According to DOL officials, the proposed QDIA rule change may apply to managed accounts offered as a QDIA to participants. However, the proposed requirements as written may be difficult for plan sponsors to implement because they are not tailored specifically for managed accounts. One participant advocacy group noted that, without similar information, participants may not be able to effectively assess managed account performance over time and compare that performance to other professionally managed investment options available under the plan or across different managed account providers. As mentioned above, DOL affirms in the participant-level disclosure regulation that performance data are required to help participants in 401(k) plans to make informed decisions about managing investments in their retirement accounts, and that appropriate benchmarks are helpful tools participants can use to assess the various investment options available under their plan. The benefits outlined in the participant-level disclosure regulation would also apply to the proposed changes to the QDIA regulation. Specifically, DOL expects that the enhanced disclosures required by the proposed regulation would benefit participants by providing them with critical information they need to evaluate the quality of investments offered as QDIAs, leading to improved investment results and retirement planning decisions by participants. DOL believes that the disclosures under the proposed regulation, combined with the performance reporting requirements in the participant-level disclosure regulation, would allow participants to determine whether efficiencies gained through these investments are worth the price differential participants generally would pay for such funds. 
However, DOL does not require plan sponsors to use standard metrics to report on the performance of managed accounts for participants who are defaulted into the service as a QDIA. Absent such requirements, it could be difficult for these participants to evaluate the effect that additional fees have on the performance of their managed accounts, including how those fees affect returns and retirement account balances, possibly eroding the value of the service over time. Improved performance reporting could help participants understand the risks associated with the additional fees and possible effects on their retirement account balances if the managed accounts underperform, which is critical information that participants could use to take action to mitigate those risks. Discussions with managed account providers suggest that returns-based performance reports and custom benchmarking can be provided to managed account participants. For example, as shown in figure 11, one managed account provider we spoke to already furnishes participants access to online reports that include returns-based performance data and custom benchmarks, which can allow them to compare performance for a given period with an established equity index and bond index. Some providers told us that it would be difficult to provide participants in managed accounts with performance information and benchmarks because their retirement portfolios contain highly personalized asset allocations. While it may be more challenging for providers to furnish performance information on personalized managed accounts compared to model portfolios, we identified one participant statement that included performance information from a provider who personalizes asset allocations for their participants’ retirement portfolios. 
The provider told us that the blended custom benchmark described in figure 11 allows participants to more accurately evaluate and compare the aggregate performance of the different individual funds held in their managed account because the benchmark is linked to the participant’s risk tolerance. The online report also describes any positive or negative excess returns for the portfolio relative to the return of the custom benchmark, net of fees. The provider said that the excess return statistic is representative of the value that the provider or portfolio manager has added or subtracted from the participant’s portfolio return for a given period. Another managed account provider furnishes retirement readiness statements that include returns-based information for each of the funds in participants’ accounts. However, the statement did not include standard or custom benchmarks that would allow participants to compare the performance of their managed account with other market indexes. Some sponsors report that their choice of a managed account provider may be limited to those options—sometimes only one—offered by the plan’s record keeper. Although DOL’s general guidance on fiduciary responsibilities encourages sponsors to consider several potential providers before hiring one, six of the 10 sponsors we interviewed said that they selected a managed account provider offered by their record keeper without considering other options and two other sponsors said that their record keeper’s capabilities partially restricted their choice of a provider. Some record keepers voluntarily offered sponsors more managed account provider options when sponsors asked for them. In the absence of DOL requiring sponsors to request multiple provider options, sponsors said they were reluctant to pursue options not offered by their record keeper for a variety of reasons. 
These reasons included: (1) concern that their record keeper’s systems might be unable to support additional options; (2) familiarity with the current provider offered by their record keeper; and (3) belief that there was no need to consider other options—one sponsor said that its record keeper has consistently provided excellent service and support for a reasonable fee and, as a result, the sponsor felt comfortable accepting the record keeper’s recommendation of the provider offered on its recordkeeping system. Without the ability to choose among multiple providers, sponsors may end up selecting a provider who charges participants higher additional fees than other providers who use comparable strategies to manage participant investments; these fees are ultimately deducted from participant account balances. In addition, limited choices can result in sponsors selecting a provider whose strategy does not align with their preferred approach for investing participant contributions. For example, a sponsor who endorses a conservative investment philosophy for their plan could select a provider who uses a more aggressive method for managing participant investments. Several managed account providers and record keepers said that a limited number of providers are offered because, among other things, it is costly to integrate 401(k) recordkeeping systems with managed account provider systems. In addition, record keepers may offer a limited number of providers to avoid losing revenue and because they evaluate a provider before deciding to offer its managed account service. Such steps include reviewing the provider’s investment strategy and assessing how the provider interacts with participants. One managed account provider estimated that sponsors might have to spend $400,000 and wait more than a year before offering the provider’s managed account to plan participants if it is not already available on their record keeper’s system. 
Additionally, record keepers may lose target date fund revenue or forgo higher revenue opportunities by offering certain managed account providers and may believe that offering multiple options is unnecessary once they have identified a provider that is effective. Although sponsors may have access to a limited number of managed account providers on their record keepers’ systems, some providers have developed approaches that make it easier for record keepers to offer more than one managed account option to sponsors. For instance, one provider we interviewed, which acts as an intermediary and fiduciary, contracts with several other providers and makes all of these providers available to its record keepers, thus allowing the record keepers’ sponsors to choose among several managed account providers without incurring additional costs to integrate the record keeper with any of the providers. Another managed account provider has developed a process to transfer information to record keepers that does not require integration with the recordkeeping system, thus making it less difficult for any record keeper to work with them. Available evidence we reviewed suggests that sponsors lack sufficient guidance on how to select and oversee managed account providers. Several of the sponsors we interviewed said that they were unaware of any set list of standards for overseeing managed accounts, so they do not follow any standards, and even managed account providers felt that sponsors have insufficient knowledge and information to effectively select a provider. Because sponsors may not have sufficient knowledge and information, record keepers could play a larger role in the selection process. In addition, providers indicated that it is difficult for sponsors to compare providers and attributed this difficulty to the absence of any widely accepted benchmarks or other comparison tools for sponsors. 
Some industry representatives indicated that additional guidance could help sponsors better select and oversee managed account providers and highlighted specific areas in which guidance would be beneficial, such as: determining whether a managed account fee is reasonable; understanding managed accounts and how they function; and clarifying factors sponsors should consider when selecting a managed account provider. Although DOL is responsible for assisting and educating sponsors by providing them with guidance, it has not issued guidance specific to managed accounts, as it has done for target date funds. DOL has issued general guidance on fiduciary responsibilities, including regulations under ERISA 404(a) and 404(c), which explicitly state DOL’s long-standing position that nothing in either regulation serves to relieve a fiduciary from its duty to prudently select and monitor any service provider to the plan. DOL guidance on target date funds outlines the factors sponsors should consider when selecting and monitoring target date funds, such as performance and fees, among other things. The absence of similar guidance specific to managed accounts has led to inconsistency in sponsors’ procedures for selecting and overseeing providers and may inhibit their ability to select a provider who offers an effective service for a reasonable fee. Specifically, without assistance regarding what they should focus on, sponsors may not be considering factors that DOL considers relevant for making fiduciary decisions, such as performance information. For example, sponsors considered a range of factors when selecting a managed account provider, including record keeper views on the quality of the provider, the provider’s willingness to serve as a fiduciary, and the managed account provider’s investment strategy. 
In addition, as shown in table 6, while nearly all of the sponsors said that they considered fees when selecting a managed account provider, only 1 of the 10 sponsors we interviewed said that they considered performance information when selecting a managed account provider. In addition, only half of the sponsors we interviewed reported that they take steps to formally benchmark fees by, for example, comparing their participants’ fees to the amount of fees that participants in similarly-sized organizations pay. The extent to which sponsors oversee managed account providers also varies. Nearly all of the 10 sponsors we interviewed said that they review reports from their managed account provider or record keeper as part of their oversight process, and the managed account providers we interviewed highlighted the role that these reports play in the oversight process. Several of these providers noted that the reports they provide help sponsors fulfill their fiduciary responsibility for oversight. Most sponsors said that they also take other steps to oversee managed account providers, such as regularly meeting with them. However, only one sponsor said that, as part of its oversight activities, it independently evaluates benchmarks, such as stock market performance indexes. In addition, even though participants generally pay an additional fee for managed account services, not all of the sponsors we interviewed said that they monitor fees. Some industry representatives indicated that consistent performance information could help sponsors more effectively compare prospective managed account providers and ultimately improve selection and oversight. 
Similar to the challenges participants face in evaluating managed accounts because of a lack of performance information, industry representatives said that sponsors need information as well, including: useful, comparative performance information and a standard set of metrics to select suitable providers; access to standard performance benchmarks to monitor them; and access to comparable managed account performance information to evaluate performance. Some providers highlighted challenges with providing performance information on managed accounts and, as a result, furnish sponsors with other types of information to demonstrate their value to participants. For example, providers may not furnish returns-based performance information to demonstrate how their offerings have affected participants because the personalized nature of managed accounts makes it difficult to measure performance. In lieu of providing returns-based performance information, providers furnish sponsors with changes in portfolio risk levels and diversification, changes in participant savings rates, and retirement readiness. One managed account provider said that it does not believe there is a way to measure the performance of managed accounts, noting that it develops 20 to 50 investment portfolios for any given plan based on the investment options available in the plan. Nonetheless, a few providers voluntarily furnish sponsors with returns-based performance information. One provider that used broad-based market indexes and customized benchmarks noted that it would be difficult for a sponsor to select a managed account provider without being able to judge how the provider has performed in the past. In addition, this provider, unlike some other providers, noted that the personalized nature of some managed accounts does not preclude managed account providers from being able to generate returns-based performance information. 
For example, even though plans may differ, providers can collect information from record keepers for each of the plans that offer managed accounts and create aggregate returns data, which could then be disclosed to sponsors along with an explanation of how the data were generated. As shown in figure 12, the report that this provider distributes to sponsors contains an array of performance information for participant portfolios, including rates of return earned by the portfolios for multiple time periods and benchmarks. In addition, the report provides a description of the benchmarks—broad-based market indexes as well as customized benchmarks. DOL regulations require that service providers furnish sponsors with performance and benchmarking information for the investment options available in the plan. DOL maintains that sponsors need this information in order to make better decisions when selecting and monitoring providers for their plans. However, DOL regulations generally do not require managed account providers to furnish sponsors with performance and benchmarking information for managed accounts because, as previously noted, managed accounts are not considered to be designated investment alternatives. Without this information, sponsors cannot effectively compare different providers when making a selection or adequately determine whether their managed account offerings are having a positive effect on participant retirement savings, as they can currently determine with the designated investment alternatives available in the plan. Managed accounts can be useful services and may offer some advantages for 401(k) participants. They build diversified portfolios for participants, help them make investment decisions, select appropriate asset allocations, and estimate the amount they need to contribute to achieve a secure retirement. 
Given these potential advantages, it is no surprise that the number of managed account providers has grown and that plan sponsors, seeking to provide the best options for plan participants, have increasingly offered managed accounts. The extent to which managed accounts benefit participants may depend on the participant’s level of engagement and ability to increase their savings. Despite the potential advantages, better protections are needed to ensure that participants realize their retirement goals. These protections are especially important as additional fees for this service can slow or erode participants’ accumulated retirement savings over time. Helping plan sponsors understand and make appropriate decisions about managed accounts can better ensure that participants are able to reap the full advantages of managed accounts. Since plan sponsors select a managed account provider, participants who use these services are subject to that managed account provider’s structure and strategies for allocating participant assets, which can potentially affect participants’ ability to save for retirement, especially if they pay high fees. Some participants cannot be assured that they are receiving impartial managed account services or are able to rely on accountable investment professionals taking on appropriate fiduciary responsibilities. Clarifying fiduciary roles for providers who offer managed accounts to participants on an opt-in basis or for providers who offer additional services to participants in or near retirement could help ensure that sponsors have a clear understanding of provider responsibilities so they can offer the best services to their participants. DOL can also help sponsors gain clarity and confidence in selecting and monitoring managed account providers. This is particularly salient since managed accounts can be complicated service arrangements and there are considerable structural differences among the managed account options offered by providers. 
By requiring sponsors to request multiple provider options from their record keeper, DOL can help ensure that sponsors thoroughly evaluate managed account providers before they are offered to participants. In addition, providing sponsors with guidance that clarifies standards and suggests actions for prudently selecting and overseeing managed account providers, such as documenting their processes and understanding the strategies used in the managed account, positions sponsors to better navigate their fiduciary responsibilities. Additional guidance also positions sponsors to consider additional factors when choosing to default participants into managed accounts. Supplementing this guidance by requiring providers to furnish consistent performance information to sponsors so that they can more effectively compare providers can assist sponsors in their efforts to provide a beneficial service that could help preserve and potentially enhance participants’ retirement security. Finally, DOL can also help participants evaluate whether their managed account service is beneficial. Without standardized performance and benchmarking information, participants may not be able to effectively assess the performance of their managed account and determine whether the additional fee for the service is worth paying. For participants who opt into managed accounts, this information could help them more effectively assess the performance of their managed account and compare that performance to other professionally managed alternatives that may be less expensive, such as target date funds. Alternatively, for participants who are defaulted into managed accounts, this information could be valuable when they start to pay more attention to their retirement savings. 
To better protect plan sponsors and participants who use managed account services, we recommend that the Secretary of Labor direct the Assistant Secretary for the Employee Benefits Security Administration (EBSA) to: a) Review provider practices related to additional managed account services offered to participants in or near retirement, with the aim of determining whether conflicts of interest exist and, if it determines it is necessary, taking the appropriate action to remedy the issue. b) Consider the fiduciary status of managed account providers when they offer services on an opt-in basis and, if necessary, make regulatory changes or provide guidance to address any issues. To help sponsors who offer managed account services or who are considering doing so better protect their 401(k) plan participants, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA to: c) Provide guidance to plan sponsors for selecting and overseeing managed account providers that addresses: (1) the importance of considering multiple providers when choosing a managed account provider, (2) factors to consider when offering managed accounts as a QDIA or on an opt-in basis, and (3) approaches for evaluating the services of managed account providers. d) Require plan sponsors to request from record keepers more than one managed account provider option, and notify the Department of Labor if record keepers fail to do so. To help sponsors and participants more effectively assess the performance of managed accounts, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA to: e) Amend participant disclosure regulations to require that sponsors furnish standardized performance and benchmarking information to participants. To accomplish this, EBSA could promulgate regulations that would require sponsors who offer managed account services to provide their participants with standardized performance and benchmarking information on managed accounts. 
For example, sponsors could periodically furnish each managed account participant with the aggregate performance of participants’ managed account portfolios and returns for broad-based securities market indexes and applicable customized benchmarks, based on those benchmarks provided for the plan’s designated investment alternatives. f) Amend service provider disclosure regulations to require that providers furnish standardized performance and benchmarking information to sponsors. To accomplish this, EBSA could promulgate regulations that would require service providers to disclose to sponsors standardized performance and benchmarking information on managed accounts. For example, providers could, prior to selection and periodically thereafter, as applicable, furnish sponsors with aggregated returns for generalized conservative, moderate, and aggressive portfolios, actual managed account portfolio returns for each of the sponsor’s participants, and returns for broad-based securities market indexes and applicable customized benchmarks, based on those benchmarks provided for the plan’s designated investment alternatives. We provided a draft of this report to the Department of Labor, the Department of the Treasury, the Securities and Exchange Commission, and the Consumer Financial Protection Bureau for review and comment. The Department of the Treasury and the Consumer Financial Protection Bureau did not have any comments. DOL and SEC provided technical comments, which we have incorporated where appropriate. DOL also provided written comments, which are reproduced in appendix IV. As stated in its letter, DOL agreed with our recommendations and will consider each of them as it moves forward with a number of projects. 
In response to our recommendation that DOL review provider practices related to additional managed account services offered to participants in or near retirement to determine whether conflicts of interest exist, DOL agreed to include these practices in its current review of investment advice conflicts of interest, noting that such conflicts continue to be a concern. Regarding our second recommendation, to consider the fiduciary status of managed account providers when they offer services on an opt-in basis, DOL agreed to review existing guidance and consider whether additional guidance is needed in light of the various business models we described. By considering managed account service provider practices and fiduciary roles in its current efforts and taking any necessary action to address potential issues, we believe DOL will help ensure that sponsors and participants receive unconflicted managed account services from qualified managers. DOL also agreed to consider our other recommendations in connection with its current regulatory project on standards for brokerage windows in participant-directed individual account plans. We believe that this project may be a good starting point for requesting additional information and considering adjustments to those managed account services participants obtain from advisers through brokerage windows. As we noted in our report, we did not include these types of managed accounts in our review because the plan sponsor is not usually involved in the selection and monitoring of these advisers. Since participants can obtain managed account services without using a brokerage window, we encourage DOL to also consider our third and fourth recommendations outside of the context of brokerage windows. Providing guidance to sponsors for selecting and overseeing managed account providers, as suggested by our third recommendation, may help sponsors understand their fiduciary responsibilities with respect to managed accounts. 
Similarly, requiring plan sponsors to ask for more than one choice of managed account provider, as suggested by our fourth recommendation, could encourage record keepers to offer additional choices. By taking the steps outlined in these recommendations, DOL can help ensure that participants are being offered effective managed account services for reasonable fees. With respect to our recommendation requiring plan sponsors to ask for more than one choice of managed account provider, DOL noted that it needs to review the extent of its legal authority to effectively require plans to have more than one managed account service provider. We continue to believe that the action we suggest in our recommendation—that DOL simply require plan sponsors to ask for more than one choice of a provider, which is slightly different than how DOL has characterized it— may be an effective method of broadening plan sponsors’ choices of managed account providers. However, we agree that DOL should examine the scope of its existing authority in considering how it might implement this recommendation. Finally, DOL agreed to consider our recommendations on the disclosure of performance and benchmarking information on managed accounts to participants and sponsors in connection with its open proposed rulemaking project involving the qualified default investment alternative and participant-level disclosure regulations. We believe that DOL’s consideration of these recommendations in connection with this rulemaking project will be helpful for participants and sponsors, and encourage DOL to include managed accounts in this rulemaking. Although managed accounts are different than target date funds in multiple ways, as presented in our report, we believe that managed account providers can and should provide some level of performance and benchmarking information to sponsors—and sponsors to participants—to describe how managed accounts perform over time and the risks associated with the service. 
In addition, to the extent that managed accounts offered on an opt-in basis are not covered by DOL’s project, we encourage DOL to consider adopting similar changes to the participant-level disclosures for those managed accounts that are not governed by QDIA regulations. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Labor, the Secretary of the Treasury, the Chair of the Securities and Exchange Commission, the Director of the Consumer Financial Protection Bureau, and other interested parties. In addition, the report will be available at no charge on GAO’s website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. Our objectives for this study were to determine (1) how service providers structure managed accounts, (2) the advantages and disadvantages of managed accounts for 401(k) participants, and (3) the challenges, if any, that plan sponsors face in selecting and overseeing managed account providers. To answer our research objectives we undertook several different approaches. We reviewed relevant research and federal laws, regulations, and guidance on managed accounts in 401(k) plans. We reviewed available documentation on the structure of managed accounts in 401(k) plans and the role of service providers, including Securities and Exchange Commission (SEC) filings of the Form ADV by 30 record keepers, managed account providers, and other related service providers. 
We interviewed industry representatives and service providers involved with managed accounts—including record keepers, academics, industry research firms, and participant advocacy groups—and government officials from the Department of Labor’s Employee Benefits Security Administration (EBSA), SEC, the Department of the Treasury, and the Consumer Financial Protection Bureau. To examine key issues related to how managed accounts in 401(k) plans are structured, we conducted in-depth case studies of eight selected managed account providers. Since we were unable to identify a comprehensive list of managed account providers that provide services to 401(k) plans, to select providers for case studies we first developed a list of 14 managed account providers based on discussions with two industry research firms and our own analysis of information from record keeper websites and other publicly available documentation. To assess the reliability of these data, we interviewed the two industry research firms and compared their information with the results of our analysis for corroboration and reasonableness. We determined that the data we used were sufficiently reliable for selecting managed account providers for case studies. From the list of 14 providers, we selected 10 providers based on their size, location, and legal and fee structures, from which we used eight as the basis for our case studies. According to our estimates, the eight managed account providers we included in the case studies represented over 95 percent of the managed account industry in defined contribution plans, as measured by assets under management in 2013. In conducting case studies of managed account providers, we interviewed representatives of each managed account provider and chose five providers for site visits based on their locations and size. 
We also reviewed publicly available documentation describing the nature of the managed account and sample reports furnished by providers, confirmed the type of information these providers consider when managing a participant’s account, and analyzed fee data furnished by managed account providers. To assess the reliability of the fee data furnished by managed account providers, we corroborated and assessed the completeness of reported fee data based on information in provider SEC filings and any other relevant documentary evidence, when possible. We determined that the data were sufficiently reliable for depicting the range and types of fees charged to sponsors and participants by providers. In addition, to further understand the different strategies and structures of managed accounts, we developed and submitted five hypothetical participant scenarios in one hypothetical plan to the eight service providers and asked them to provide example asset allocations, and advice if practical, for those participants. Seven of the eight managed account providers completed and returned asset allocations to us. See appendix II for additional detail on the development of hypothetical scenarios and results from this work. To illustrate potential performance outcomes for participants in managed accounts, we used available data on actual managed account rates of return and fees to show how managed accounts could affect 401(k) account balances over 20 years. We developed two scenarios, isolating the effects of variability in the following factors: 1. Managed account rates of return – We used annual average managed account rates of return ranging from -0.1 percent to 2.4 percent, based on published performance data. We compared the change in account balances for those managed account rates of return with the change in account balances for a 1.4 percent rate of return experienced by participants who directed their own 401(k) investments. 2. 
Managed account fees – We used different fee levels obtained from published reports and provider interviews ranging from a low additional annual fee of 0.08 percent to a 1 percent annual fee. We compared fee totals and ending account balances for varying fee levels with those of participants who did not pay the additional fee because they directed their own 401(k) investments. For each scenario, we held all other factors constant by assuming that the participant’s starting account balance was $17,000 and starting salary was $40,000, the salary increased at a rate of 1.75 percent per year, and the participant saved 9.7 percent of their salary each year. To the extent possible, we developed scenarios using information provided to us during interviews with industry representatives or found in published reports on managed accounts or on other economic factors. To assess the reliability of these data, we considered the reliability and familiarity of the source of the data or information and, when necessary, interviewed representatives of those sources about their methods, internal controls, and results. Based on these interviews and our review of published data, we determined that the data we used were sufficiently reliable for use in these illustrations. Because this work presents simplified illustrations of potential effects on participants over time, we used nominal dollar amounts over 20 years and did not take into account inflation or changes in interest rates. Similarly, to minimize effects of percentage growth/loss sequencing on account balances, we applied the same rates of return to each of the 20 years for each scenario. The rates of returns we used in both scenarios already incorporated different asset allocations for participants with a managed account or a self-directed 401(k) account. This work does not attempt to specify or adjust these specific asset allocations. 
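The compounding in these two scenarios can be sketched as a simple year-by-year projection. The function below uses the report's stated assumptions ($17,000 starting balance, $40,000 starting salary growing 1.75 percent per year, 9.7 percent of salary saved annually, nominal dollars, constant rates of return over 20 years); the exact order of operations within each year (contribution added first, then the fee deducted from that year's return) is our assumption for illustration, not the report's precise model:

```python
# Illustrative sketch of the report's 20-year account-balance scenarios.
# Assumed ordering within each year: contribution, then net growth, then raise.

def project_balance(annual_return, annual_fee=0.0, years=20,
                    start_balance=17_000.0, start_salary=40_000.0,
                    salary_growth=0.0175, savings_rate=0.097):
    """Return the nominal ending balance after `years`, deducting
    `annual_fee` (a fraction of assets) from each year's return."""
    balance, salary = start_balance, start_salary
    for _ in range(years):
        balance += salary * savings_rate           # annual contribution
        balance *= 1 + annual_return - annual_fee  # net growth for the year
        salary *= 1 + salary_growth                # annual salary increase
    return balance

# Scenario 1: vary managed account returns (-0.1% to 2.4%) against the
# 1.4% return experienced by self-directed participants.
self_directed = project_balance(0.014)
for r in (-0.001, 0.024):
    print(f"return {r:+.1%}: ${project_balance(r):,.0f} "
          f"(self-directed: ${self_directed:,.0f})")

# Scenario 2: vary the additional managed account fee (0.08% to 1%),
# holding the rate of return constant.
for fee in (0.0008, 0.01):
    print(f"fee {fee:.2%}: ${project_balance(0.014, fee):,.0f}")
```

Because the same rate is applied every year, this sketch also reflects the report's simplification of ignoring return sequencing, inflation, and interest-rate changes.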
To identify the advantages and disadvantages of managed accounts for 401(k) plan participants and any challenges sponsors face in selecting and overseeing managed account providers, we conducted semi-structured interviews with 12 plan sponsors. Our process for interviewing plan sponsors involved multiple steps, as outlined below. Since a comprehensive list of sponsors that offer managed accounts did not exist at the time of our review, to select sponsors for semi-structured interviews, we conducted a non-generalizable survey facilitated by PLANSPONSOR, a member organization. The survey included questions about sponsors’ 401(k) plans, such as the amount of assets included in the 401(k) plan and the number of participants in the plan, and the reasons why sponsors decided to offer, or not offer, managed accounts to 401(k) plan participants. To minimize errors arising from differences in how survey questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with industry representatives. Based on feedback from these pretests, we revised the survey in order to improve question clarity. PLANSPONSOR included a link to our survey in an e-mail that was sent to approximately 60,000 of its subscribers. In addition, PLANSPONSOR promoted the survey eight times over 4 weeks between June 3 and June 28, 2013. A record keeper and one industry association also agreed to forward a link to our survey to their clients and members, respectively. Fifty-seven sponsors completed our survey, and 25 of them provided contact information, indicating they were willing to speak with us. Forty-eight sponsors indicated that they offer managed accounts to their 401(k) plan participants, and 20 of these sponsors provided us with their contact information. Nine sponsors indicated that they do not offer managed accounts to their 401(k) plan participants, and five of these sponsors provided us with their contact information. 
We reviewed the survey responses of those sponsors willing to speak with us and selected sponsors to interview based on the following characteristics: plan size (assets in the plan and number of participants); managed account provider; enrollment method (Qualified Default Investment Alternative, or QDIA, vs. opt-in); and length of time sponsors have been offering managed accounts. To obtain a variety of perspectives, we selected at least two sponsors with any given characteristic to the extent possible. For instance, we selected several (1) sponsors of varying sizes in terms of the amount of assets included in their 401(k) plans and the number of plan participants; (2) sponsors that use different managed account providers; and (3) sponsors that have been offering managed accounts for more than 5 years. Also, we selected one sponsor that offered managed accounts as a default option. In total, we selected 10 sponsors that offer managed accounts and 2 sponsors that do not offer managed accounts, as shown in table 7. We developed semi-structured interview questions to capture information on how sponsors learn about and select managed accounts, how they oversee managed accounts, and the advantages and disadvantages of managed accounts for participants. We developed separate questions for sponsors offering managed accounts and those not offering managed accounts. We shared the interview questions with three sponsors before we began conducting the semi-structured interviews to ensure that the questions were appropriate and understandable. We made no substantive changes to the questions based on this effort. We interviewed 10 sponsors that offer managed accounts and 2 sponsors that do not offer managed accounts. As part of our interview process, we also requested and reviewed relevant documentation from plan sponsors such as quarterly managed account reports from managed account providers or record keepers. 
As part of our approach for determining the advantages and disadvantages of managed accounts for 401(k) plan participants, we developed a non-generalizable online survey to directly obtain participant perspectives on managed accounts, such as the advantages and disadvantages of managed accounts for 401(k) plan participants and participants’ level of satisfaction with their managed account offering. However, we did not receive any completed responses to our survey. The survey was conducted on a rolling basis from August 1, 2013 to February 25, 2014—a link to the survey was distributed at various points in time. We conducted this performance audit from October 2012 through June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To understand the different strategies and structures of managed accounts, we developed and submitted five hypothetical participant scenarios in one hypothetical plan to the eight managed account providers chosen for our case studies. Table 8 shows basic information provided for the hypothetical participant scenarios 1, 2, and 3. Table 9 shows the additional personalized information provided to managed account providers for hypothetical participant scenarios 1 and 3. Table 10 shows some of the hypothetical plan level information we compiled for scenario development. In addition, to generate hypothetical plan information, we selected 14 hypothetical plan investment options from various asset classes, as shown in table 11. We selected these mutual funds to represent a range of asset classes and based on available information from April 2013 about whether these funds could be found in 401(k) plans. 
We developed the hypothetical scenarios based on data and information from industry representatives—including research firms, other industry groups, and providers—and a calculator and statistics provided by a number of government agencies. To assess the reliability of these data, we considered the reliability and familiarity of the source of the data or information and, when necessary, interviewed representatives of those sources about their methods, internal controls, and results. We determined that the data we used were sufficiently reliable for developing hypothetical participant- and plan-level scenarios. We asked all eight managed account providers chosen for our case studies to provide example asset allocations and advice, if practical, for all five hypothetical participant scenarios. Seven of the eight managed account providers completed and returned asset allocations to us for the hypothetical scenarios. Five of the seven providers who sent allocations furnished two allocations for each scenario, but each gave different reasons for doing so. One of the providers furnished two allocations for each scenario because they actively manage participant allocations given changes in market conditions and their allocations could generally range within the two extremes. Another provider furnished two allocations for each scenario assuming different initial holdings because, for that provider’s strategy, a person’s initial holdings of plan investment options influence the provider’s recommended allocations, even though both of these allocations have the same overall risk and return characteristics. In some of the figures presenting results of this work, we have included one or both of these two providers’ second allocations. 
For the other three providers, we chose to include only one of their asset allocations in the figures presenting the results of this work because their alternate allocations either did not pertain to the managed account service by itself or did not include the full services offered by the managed account. We did, however, incorporate the more general understanding we gained from these alternate asset allocations in our report findings. In addition, a number of providers’ systems required that they make certain assumptions about participants outside of the hypothetical scenario information we provided. In these cases, the assumptions they made differed, sometimes substantially, and this may have affected their asset allocation results. For example, to generate a participant’s goal, providers used varying assumptions of a participant’s annual salary growth—from 1.5 to 3.5 percent. We did not attempt to categorize or eliminate any inconsistencies in provider strategies, but instead report their results to show the variation that a participant may experience. As shown in figure 13, the median values of all providers’ allocations show a downward trend in asset allocations to equity assets and an upward trend in asset allocations to fixed income and/or cash-like assets as participants age. For each hypothetical participant, we found that providers varied widely in their recommendations of specific investment options, but participants could be similarly allocated to asset classes, such as cash and cash equivalents, equity, and fixed income. For the hypothetical 30-year-old participant, select asset allocations were presented in the report at figure 5, and all allocations to specific investment options are shown in figure 14. The results were similar for the 45- and 57-year-old hypothetical participants. 
Starting from an initial asset allocation of 55 percent equity and 45 percent fixed income, providers reported varying asset allocations to investment options for the 45-year-old hypothetical participant, as shown in figure 16, and allocations at the asset class level shown in figure 17. Starting from an initial asset allocation of 43 percent equity and 57 percent fixed income, figure 18 shows variation in allocations to investment options for the 57-year-old hypothetical participant and figure 19 shows variation in allocations at the asset class level. Charles A. Jeszeck, Director, (202) 512-7215 or [email protected]. In addition to the individual above, Tamara Cross (Assistant Director), Jessica Gray (Analyst-in-Charge), Ted Burik, Sherwin Chapman, and Laura Hoffrey made significant contributions to this report. In addition, Cody Goebel, Sharon Hermes, Stuart Kaufman, Kathy Leslie, Thomas McCool, Sheila McCoy, Mimi Nguyen, Roger Thomas, Frank Todisco, Walter Vance, and Kathleen Van Gelder also contributed to this report. 401(k) plan sponsors have increasingly offered participants managed accounts—services under which providers manage participants' 401(k) savings over time by making investment and portfolio decisions for them. These services differ from investment options offered within 401(k) plans. Because little is known about whether managed accounts are advantageous for participants and whether sponsors understand their own role and potential risks, GAO was asked to review these services. GAO examined (1) how providers structure managed accounts, (2) their advantages and disadvantages for participants, and (3) challenges sponsors face in selecting and overseeing providers. In conducting this work, GAO reviewed relevant federal laws and regulations and surveyed plan sponsors. GAO interviewed government officials, industry representatives, other service providers, and 12 plan sponsors of varying sizes and other characteristics. 
GAO also conducted case studies of eight managed account providers with varying characteristics by, in part, reviewing required government filings. GAO's review of eight managed account providers who, in 2013, represented an estimated 95 percent of the industry involved in defined contribution plans, showed that they varied in how they structured managed accounts, including the services they offered and their reported fiduciary roles. Providers used varying strategies to manage participants' accounts and incorporated varying types and amounts of participant information. In addition, GAO found some variation in how providers reported their fiduciary roles. One of the eight providers GAO reviewed had a different fiduciary role than the other seven providers, which could ultimately provide less liability protection for sponsors for the consequences of the provider's choices. The Department of Labor (DOL) requires managed account providers who offer services to defaulted participants to generally have the type of fiduciary role that provides certain levels of fiduciary protection for sponsors and assurances to participants of the provider's qualifications. DOL does not have a similar explicit requirement for providers who offer services to participants on an opt-in basis. Absent explicit requirements from DOL, some providers may actively choose to structure their services to limit the fiduciary liability protection they offer. According to providers and sponsors, participants in managed accounts receive improved diversification and experience higher savings rates compared to those not enrolled in the service; however, these advantages can be offset by paying additional fees over time. Providers charge additional fees for managed accounts that range from $8 to $100 on every $10,000 in a participant's account. As a result, some participants pay a low fee each year while others pay a comparatively large fee on their account balance. 
Using the limited fee and performance data available, GAO found that the potential long-term effect of managed accounts could vary significantly, sometimes resulting in managed account participants paying substantial additional fees and experiencing lower account balances over time compared to other managed account participants. Further, participants generally do not receive performance and benchmarking information for their managed accounts. Without this information, participants cannot accurately evaluate the service and make effective decisions about their retirement investments. Even though DOL has required disclosure of similar information for 401(k) plan investments, it generally does not require sponsors to provide this type of information for managed accounts. Sponsors are challenged by insufficient guidance and inconsistent performance information when selecting and overseeing managed account providers. DOL has not issued guidance specific to managed accounts on how sponsors should select and oversee providers, as it has done for other funds. GAO found that the absence of guidance for managed accounts has led to inconsistency in sponsors' procedures for selecting and overseeing providers. Without better guidance, plan sponsors may be unable to select a provider who offers an effective service for a reasonable fee. In addition, DOL generally does not require providers to furnish sponsors with performance and benchmarking information for managed accounts, as it does for investments available in a plan, although some providers do furnish similar information. Without this information, sponsors cannot effectively compare providers when making a selection or determine whether managed accounts are positively affecting participants' retirement savings. 
Among other things, GAO recommends that DOL consider provider fiduciary roles, require disclosure of performance and benchmarking information to plan sponsors and participants, and provide guidance to help sponsors better select and oversee managed account providers. In response, DOL agreed with GAO's recommendations and will consider changes to regulations and guidance to address any issues. |
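The fee range quoted above ($8 to $100 on every $10,000 in a participant's account, or roughly 0.08 to 1.00 percent of assets per year) can be turned into a rough illustration of long-term fee drag. The sketch below assumes a $10,000 starting balance, a 7 percent gross annual return, and a 20-year horizon; these figures are illustrative assumptions, not data from the report:

```python
def balance_after(years, start, gross_return, annual_fee_rate):
    """Compound a balance yearly, deducting an asset-based fee from the result."""
    balance = start
    for _ in range(years):
        balance *= 1 + gross_return      # assumed market growth
        balance *= 1 - annual_fee_rate   # managed account fee taken from assets
    return balance

# $8 vs. $100 per $10,000 of assets = 0.08% vs. 1.00% charged annually
low_fee = balance_after(20, 10_000, 0.07, 0.0008)   # roughly $38,100
high_fee = balance_after(20, 10_000, 0.07, 0.0100)  # roughly $31,700
```

Even this simplified compounding shows how a fee difference of less than one percentage point can leave one managed account participant several thousand dollars behind another after 20 years, which is the pattern of varying long-term effects GAO describes.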
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Department of Defense’s budget is the product of a complex process designed to develop an effective defense strategy that supports U.S. national security objectives. For munitions, the department generally does not have the combatant commands submit separate budgets, but relies on the military services’ budget submissions. Thus, the military services are largely responsible for determining requirements for the types and quantities of munitions that are bought. The Department of Defense Inspector General and GAO have issued numerous reports dating back to 1994 identifying systemic problems—such as questionable and inconsistently applied data, inconsistent processes among and between services, and unclear guidance—that have inflated the services’ requirements for certain categories of munitions and understated requirements for other categories. (For a listing of these reports, see app. II.) In 1997, as one step toward addressing these concerns, the Department of Defense issued Instruction 3000.4, which sets forth policies, roles and responsibilities, time frames, and procedures to guide the services as they develop their munitions requirements. This instruction is referred to as the capabilities-based munitions requirements process and is the responsibility of the Under Secretary of Defense for Acquisition, Technology, and Logistics. The instruction describes a multi-phased analytical process that begins when the Under Secretary of Defense for Policy develops—in consultation with the Chairman of the Joint Chiefs of Staff, the military services, and the combatant commands—policy for the Defense Planning Guidance. The Defense Intelligence Agency uses the Defense Planning Guidance and its accompanying scenarios, as well as other intelligence information, to develop a threat assessment. This assessment contains estimates and facts about the potential threats that the United States and allied forces could expect to meet in war scenarios. 
The combatant commanders (who are responsible for the theaters of war scenarios), in coordination with the Joint Chiefs of Staff, use the threat assessment to allocate each service a share of the identified targets by phases of the war. The services then develop their combat requirements using battle simulation models and scenarios to determine the number and mix of munitions needed to meet the combatant commanders’ specific objectives. Despite the department’s efforts to standardize the process and generate consistent requirements, many questions have continued to be raised about the accuracy or reliability of the munitions requirements determination process. In April 2001, we reported continuing problems with the capabilities-based munitions requirements determination process because the department (1) had yet to complete a database providing detailed descriptions of the types of targets on large enemy installations that would likely be encountered, based on warfighting scenarios; (2) had not set a time frame for completing its munitions effectiveness database; and (3) was debating whether to include greater specificity in its warfighting scenarios and to rate the warfighting scenarios by the probability of their occurrence. These process components significantly affect the numbers and types of munitions needed to meet the warfighting combatant command’s objectives. The department acknowledged these weaknesses and recognized that inaccurate requirements can negatively affect munitions planning, programming, and budget decisions, as well as assessments of the size and composition of the industrial production base. In responding to our report’s recommendations, the department has taken a number of actions to correct the problems we identified. 
Our review of the requirements process and related documentation showed that the Department of Defense corrected the previously identified systemic problems in its process for determining munitions requirements, but the reliability of the process continues to be uncertain because of the department’s failure to link the near-term munitions needs of the combatant commands and the purchases made by the military services based on computations derived from the department’s munitions requirements determination process. Because of differences in how requirements are determined, asking a question about the quantities of munitions that are needed can result in one answer from the combatant commanders and differing answers from the military services. For this reason, the combatant commands may report shortages of munitions they need to carry out warfighting scenarios. We believe—and the department’s assessment of its munitions requirements process recognizes—that munitions requirements and purchase decisions made by the military services should be more closely linked to the needs of the combatant commanders. The main issue that the department still needs to address is engaging the combatant commands in the requirements determination process, budgeting processes, and related purchasing decisions to minimize the occurrence of reported shortages. Because of the present gap between the combatant commands’ munitions needs and the department’s requirements determination process, which helps shape the services’ purchasing decisions, munitions requirements are not consistently stated, and thus the amount of funding needed to alleviate possible shortages is not always fully understood. In April 2001, we reported that key components of the requirements determination process either had not been completed or had not been decided upon. 
At that time, the department had not completed a database listing detailed target characteristics for large enemy installations based on warfighting scenarios and had not developed new munitions effectiveness data to address deficiencies identified by the services and the combatant commanders. Additionally, the department had not determined whether to create more detailed warfighting scenarios in the Defense Planning Guidance or to rate scenarios in terms of their probability. We concluded that until these tasks were completed and incorporated into the process, questions would likely remain regarding the accuracy of the munitions requirements process as well as the department’s ability to identify the munitions most appropriate to defeat potential threats. In response to our report, the department took actions during fiscal years 2001 and 2002 to resolve the following three key issues affecting the reliability of the munitions requirements process: List of targets—The department lacked a common picture of the number and types of targets on large enemy installations as identified in the warfighting scenarios, and, as a result, each of the services had been identifying targets on enemy installations differently. To resolve this issue, the Joint Chiefs instructed the Defense Intelligence Agency, in coordination with the combatant commanders, to develop target templates that would provide a common picture of the types of potential targets on enemy installations. In August 2001, the department revised its capabilities-based requirements instruction to incorporate the target templates developed by the Defense Intelligence Agency as the authoritative threat estimate for developing munitions requirements. Munitions effectiveness data—The department was using outdated information to determine the effectiveness of a munition against a target and to predict the number of munitions necessary to defeat it. 
The department recognized that munitions effectiveness data is a critical component for requirements planning and that outdated information could over- or understate munitions requirements. To address this shortfall, the department updated its joint munitions effectiveness manual with up-to-date munitions effectiveness data for use by the services in their battle simulation models. Warfighting scenarios—The Defense Planning Guidance contains warfighting scenarios that detail conditions that may exist during the conduct of war; these scenarios are developed with input from several sources, including the Defense Intelligence Agency, the Joint Chiefs of Staff, and the services. This guidance should provide a common baseline from which the combatant commands and the services determine their munitions requirements. However, when the department adopted the capabilities-based munitions requirements instruction, details were eliminated in favor of broader guidance. To ensure that the combatant commanders and the services plan for the most likely warfighting scenario and do not use unlikely events to support certain munitions, the department revised the Defense Planning Guidance to provide fewer warfighting scenarios and more detail on each. The department expected that these actions to improve the munitions requirements process would correct over- or understated requirements and provide the combatant commands with needed munitions. However, despite the department’s efforts to enhance the requirements determination process, one problem area remains—inadequate linkage between the near-term munitions needs of the combatant commands and the purchases made by the military services based on computations derived from the department’s munitions requirements determination process. Various actions taken to address this issue have not been successful. 
The disjunction between the department’s requirements determination processes and combatant commanders’ needs is rooted in separate assessments done at different times. The services, as part of their budgeting processes, develop the department’s munitions requirements using targets provided by the combatant commands (based on the Defense Intelligence Agency’s threat report), battle simulation models, and scenarios to determine the number and mix of munitions needed to meet the combatant commanders’ objectives in each war scenario. To develop these requirements, the services draw upon and integrate data and assumptions from the Defense Planning Guidance, warfighting scenarios, and target allocations, as well as estimates of repair and return rates for enemy targets and projected assessments of damage to enemy targets and installations. Other munitions requirements are also determined, and include munitions needed (1) for forces not committed to support combat operations, (2) for forward presence and current operations, (3) to provide a post-theater of war combat capability, and (4) to train the forces, support service programs, and support peacetime operations. These requirements, in addition to the combat requirement, comprise the services’ total munitions requirement. The total munitions requirement is then compared to available inventory and appropriated funds to determine how many of each munition the services will procure within their specified funding limits and is used to develop the services’ Program Objectives Memorandum and their budget submissions to the President. Periodically the combatant commanders prepare reports of their readiness status, including the availability of sufficient types and quantities of munitions needed to meet the combatant commanders’ warfighting objectives, but these munitions needs are not tied to the services’ munitions requirements or to the budgeting process. 
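The requirement-to-procurement arithmetic described above (a combat requirement plus the non-combat categories, netted against available inventory and capped by appropriated funds) can be sketched as a simple calculation. All quantities and category amounts below are hypothetical placeholders, not figures from the report:

```python
def total_requirement(combat_requirement, other_requirements):
    """Services' total munitions requirement: combat plus the non-combat categories."""
    return combat_requirement + sum(other_requirements.values())

def procurement_quantity(requirement, inventory, funded_quantity):
    """Buy only the unfilled portion of the requirement, capped by appropriated funds."""
    shortfall = max(requirement - inventory, 0)
    return min(shortfall, funded_quantity)

# Non-combat categories named in the report; quantities are invented for illustration.
other = {
    "uncommitted_forces": 200,
    "forward_presence_and_current_operations": 150,
    "post_theater_combat_capability": 100,
    "training_and_peacetime_support": 250,
}
requirement = total_requirement(1_000, other)  # 1,700 munitions in total
buy = procurement_quantity(requirement, inventory=900, funded_quantity=600)  # 600
```

In this sketch the funded quantity, not the 800-unit shortfall, determines the buy, mirroring the report's point that the services procure within specified funding limits.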
In determining readiness, the combatant commanders develop their munitions needs using their own battle simulation models, scenarios, and targets and give emphasis to the munitions they prefer to use or need for unique war scenarios to determine the number and mix of munitions they require to meet their warfighting objectives. The combatant commanders calculate their needs in various ways—unconstrained and constrained and over various time periods (e.g., 30 days and 180 days). Unconstrained calculations are based on the combatant commanders’ assessment of munitions needs, assuming that all needed munitions are available. Constrained calculations represent the combatant commanders’ assessment of munitions needs to fight wars under certain rules of engagement that limit collateral damage and civilian and U.S. military casualties. Because the combatant commanders’ battle simulation models and scenarios differ from those used by the military services, their munitions needs are different, which can result in reports of munitions shortages. In contrast, the U.S. Special Operations Command develops its combat requirements for the number and mix of munitions needed to meet its warfighting objectives using the same battle simulation models and scenarios that the services used and provides these requirements to the services, rather than providing only potential targets to the services as other commands do. This permits the U.S. Special Operations Command to more directly influence the assumptions about specific weapons systems and munitions to be used. As a result of working together, the Command’s and the services’ requirements are the same. 
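Because the commands and the services run different battle simulation models against different scenarios, the same inventory can satisfy one party's requirement while leaving the other reporting a shortage. A minimal sketch of that disconnect, with all quantities invented for illustration:

```python
def reported_shortage(stated_need, inventory):
    """A shortage exists only when a stated need exceeds on-hand inventory."""
    return max(stated_need - inventory, 0)

inventory = 1_000
service_requirement = 950   # output of the services' models and scenarios (hypothetical)
command_need = 1_200        # output of the command's own models, constrained case (hypothetical)

service_view = reported_shortage(service_requirement, inventory)  # 0 -> no shortage seen
command_view = reported_shortage(command_need, inventory)         # 200 -> shortage reported
```

The two views of the same stockpile diverge purely because the inputs differ, which is why asking "how many munitions are needed" yields different answers from the commands and the services.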
To close the gap between the combatant commanders’ needs and the department’s munitions requirements determination process, the department initiated a pilot project in 1999 to better align the combatant commanders’ near-term objectives (which generally cover a 2-year period) with the services’ long-term planning horizon (which is generally 6 years). Another benefit of the pilot was that the Joint Chiefs of Staff could validate the department’s munitions requirements by matching requirements to target allocations. However, the Army, the Navy, and a warfighting combatant commander objected to the pilot’s results because the pilot allocated significantly more targets to the Air Force and fewer targets to the Army. Army officials objected that the pilot’s methodology did not adequately address land warfare, which is significantly different from air warfare. The Navy did not concur with the results, citing the lack of recognition for the advanced capabilities of future munitions. U.S. Central Command officials disagreed with the results, stating that a change in methodology should not in and of itself cause the allocation to shift. In July 2000, citing substantial concerns about the pilot, the Under Secretary of Defense for Acquisition, Technology, and Logistics suspended the target allocation for fiscal year 2000 and directed the services to use the same allocations applied to the fiscal year 2002 to 2007 Program Objectives Memorandum. In August 2000, the Joint Chiefs of Staff made another attempt to address the need for better linkage between the department’s munitions requirements process and the combatant commanders’ munitions needs. The combatant commanders were to prepare a near-term target allocation using a methodology developed by the Joint Chiefs of Staff. 
Each warfighting combatant commander developed two allocations—one for strike (air services) forces and one for engagement (land troops) forces for his area of responsibility. The first allocated specific targets to strike forces under the assumption that the air services can eliminate the majority of enemy targets. The second allocation assumed that less than perfect conditions exist (such as bad weather), which would limit the air services’ ability to destroy their assigned targets and require that the engagement force complete the mission. The combatant commanders did not assign specific targets to the engagement forces, but they estimated the size of the expected remaining enemy land force. The Army and the Marines then were expected to arm themselves to defeat those enemy forces. The Joint Chiefs of Staff used the combatant commanders’ near-year threat distribution and extrapolated that information to the last year of the Program Objectives Memorandum for the purpose of the services’ munitions requirements planning. The department expected that these modifications would correct over- or understated requirements and bridge the gap between the warfighting combatant commanders’ near-term interests and objectives and the services’ longer planning horizon. However, inadequate linkage remains between the near-term munitions needs of the combatant commands and the department’s munitions requirements determinations and purchases made by the military services. This is sometimes referred to as a difference between the combatant commanders’ near-term focus (generally 2 years) and the services’ longer-term planning horizon (generally 6 years). We believe, however, that there is a more fundamental reason for the disconnect; it occurs because the department’s munitions requirements determination process does not fully consider the combatant commanders’ preferences for munitions and weapon systems to be used against targets identified in projected scenarios. 
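The two-allocation method described above reduces to a back-of-the-envelope calculation: under ideal conditions strike forces are assumed to eliminate most targets, while degraded conditions (such as bad weather) leave a larger residual enemy force for the engagement forces to defeat. The target count and kill fractions below are assumptions for illustration only:

```python
def engagement_residual(total_targets, strike_kill_fraction):
    """Enemy targets left for land (engagement) forces after the strike allocation."""
    return round(total_targets * (1 - strike_kill_fraction))

ideal = engagement_residual(2_000, 0.90)     # good weather: 200 targets remain
degraded = engagement_residual(2_000, 0.60)  # bad weather limits strike forces: 800 remain
```

The spread between the two residuals is what the Army and the Marines would have to arm against, which is why the assumed strike effectiveness drives the land forces' munitions requirement.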
On June 18, 2002, the department contracted with TRW Inc. to assess its munitions requirements process and develop a process that will include a determination of the near-year and out-year munitions requirements. The assessment, which will build upon the capabilities-based munitions requirements process, is also expected to quantify the risk associated with any quantity differential between requirements and inventory and achieve a balance between inventory, production, and consumption. A final report on this assessment is due in March 2003. The department’s munitions requirements process provides varying answers for current munitions acquisitions because of the inadequate linkage between the near-term munitions needs of the combatant commands and the munitions requirements computed by the military services. As a result, the services are purchasing some critically needed munitions based on available funding and the contractors’ production capacity. For example, in December 2001, both the services and the combatant commanders identified shortages for joint direct attack munitions (a munition preferred by each of the combatant commanders). According to various Department of Defense officials, these amounts differed and exceeded previously planned acquisition quantities. Therefore, the department entered into an agreement to purchase the maximum quantities that it could fund the contractor to manufacture and paid the contractor to increase its production capacity. In such cases, the department could purchase too much or too little, depending upon the quantities of munitions ultimately needed. While this approach may be needed in the short term, it raises questions about whether over the long term it would position the services to make the most efficient use of appropriated funds and whether the needs of combatant commands to carry out their missions will be met. 
Until the department establishes a more direct link between the combatant commanders’ needs, the department’s requirements determinations, and the services’ purchasing decisions, the department will be unable to determine with certainty the quantities and types of munitions the combatant commanders need to accomplish their missions. As a result, the amount of munitions funds needed will remain uncertain, and assessments of the size and composition of the industrial production base will be negatively affected. Unless this issue is resolved, the severity of the situation will again be apparent when munitions funding returns to normal levels and shortages of munitions are identified by the combatant commands. We recommend that the Secretary of Defense establish a direct link between the munitions needs of the combatant commands—recognizing the impact of weapons systems and munitions preferred or expected to be employed—and the munitions requirements determinations and purchasing decisions made by the military services. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Government Affairs and the House Committee on Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. The Director of the Office of the Under Secretary of Defense’s Strategic and Tactical Systems provided written comments on a draft of this report. They are included in appendix III. The Department of Defense concurred with the recommended linkage of munitions requirements and combatant commanders’ needs. 
The Director stated that the department, through a munitions requirements study directed by the fiscal year 2004 Defense Planning Guidance, has identified this link as a problem and has established a solution that will be documented in the next update of Instruction 3000.4 in fiscal year 2003. The department also provided technical comments, which we incorporated in the report as appropriate. We are sending copies of this report to the appropriate congressional committees and to the Director, Office of Management and Budget. The report is also available on GAO’s Web site at http://www.gao.gov. The scope and methodology of our work is presented in appendix I. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-4300. Key contributors to this letter were Ron Berteotti, Roger Tomlinson, Tommy Baril, and Nelsie Alcoser. To determine the extent to which improvements had been made to the Department of Defense’s requirements determination process, we reviewed the Department’s Instruction 3000.4, Capabilities-Based Munitions Requirements (to ascertain roles and oversight responsibilities and to identify required inputs into the process); 17 Department of Defense Inspector General reports and 4 General Accounting Office reports relating to the department’s munitions requirements determination process (to identify reported weaknesses in the requirements determination process); and reviewed requirements determinations and related documentation and interviewed officials (to identify actions taken to correct weaknesses in the requirements determination process) from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, D.C.; Joint Chiefs of Staff (Operations, Logistics, Force Structure, Resources and Assessment), Washington, D.C.; and Army, Navy, and Air Force officials responsible for budgeting, buying, and allocating munitions. 
To determine whether the munitions requirements determination process was being used to guide current munitions acquisitions, we met with the services’ headquarters officials (to determine how each service develops its munitions requirements, to obtain data on the assumptions and inputs that go into its simulation models, to see how each service reviews the outcome of its munitions requirement process, and to determine the basis for recent munitions purchases) and interviewed officials at U.S. Central Command and U.S. Special Operations Command, MacDill Air Force Base, Florida; U.S. Southern Command, Miami, Florida; U.S. Pacific Command; Headquarters Pacific Air Forces; U.S. Army Pacific; Marine Forces Pacific; U.S. Pacific Fleet, Oahu, Hawaii; U.S. Forces Korea; Eighth U.S. Army, Seoul, Korea; and 7th Air Force, Osan, Korea (to determine whether the munitions needed by the warfighters are available). We performed our review from March 2002 through July 2002 in accordance with generally accepted government auditing standards. Defense Logistics: Unfinished Actions Limit Reliability of the Munition Requirements Determination Process. GAO-01-18. Washington, D.C.: April 2001. Summary of the DOD Process for Developing Quantitative Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: February 24, 2000. Air Force Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: September 3, 1999. Defense Acquisitions: Reduced Threat Not Reflected in Antiarmor Weapon Acquisitions. GAO/NSIAD-99-105. Washington, D.C.: July 22, 1999. U.S. Special Operations Command Munitions Requirements. Department of Defense Inspector General. Washington, D.C.: May 10, 1999. Marine Corps Quantitative Munitions Requirements Process. Department of Defense Inspector General. Washington, D.C.: December 10, 1998. Weapons Acquisitions: Guided Weapon Plans Need to be Reassessed. GAO/NSIAD-99-32. Washington, D.C.: December 9, 1998. 
Navy Quantitative Requirements for Munitions. Department of Defense Inspector General. Washington, D.C.: December 3, 1998. Army Quantitative Requirements for Munitions. Department of Defense Inspector General. Washington, D.C.: June 26, 1998. Management Oversight of the Capabilities-Based Munitions Requirements Process. Department of Defense Inspector General. Washington, D.C.: June 22, 1998. Threat Distributions for Requirements Planning at U.S. Central Command and U.S. Forces Korea. Department of Defense Inspector General. Washington, D.C.: May 20, 1998. Army’s and Marine Corps’ Quantitative Requirements for Blocks I and II Stinger Missiles. Department of Defense Inspector General. Washington, D.C.: June 25, 1996. U.S. Combat Air Power–Reassessing Plans to Modernize Interdiction Capabilities Could Save Billions. Department of Defense Inspector General. Washington, D.C.: May 13, 1996. Summary Report on the Audits of the Anti-Armor Weapon System and Associated Munitions. Department of Defense Inspector General. Washington, D.C.: June 29, 1995. Weapons Acquisition: Precision Guided Munitions in Inventory, Production, and Development. GAO/NSIAD-95-95. Washington, D.C.: June 23, 1995. Acquisition Objectives for Antisubmarine Munitions and Requirements for Shallow Water Oceanography. Department of Defense Inspector General. Washington, D.C.: May 15, 1995. Army’s Processes for Determining Quantitative Requirements for Anti-Armor Systems and Munitions. Department of Defense Inspector General. Washington, D.C.: March 29, 1995. The Marine Corps’ Process for Determining Quantitative Requirements for Anti-Armor Munitions for Ground Forces. Department of Defense Inspector General. Washington, D.C.: October 24, 1994. The Navy’s Process for Determining Quantitative Requirements for Anti-Armor Munitions. Department of Defense Inspector General. Washington, D.C.: October 11, 1994. The Air Force’s Process for Determining Quantitative Requirements for Anti-Armor Munitions. 
Department of Defense Inspector General. Washington, D.C.: June 17, 1994. Coordination of Quantitative Requirements for Anti-Armor Munitions. Department of Defense Inspector General. Washington, D.C.: June 14, 1994. | The Department of Defense (DOD) planned to spend $7.9 billion on acquiring munitions in fiscal year 2002. Ongoing military operations associated with the global war on terrorism have heightened concerns about the unified combatant commands having sufficient quantities of munitions. Since 1994, the DOD Inspector General and GAO have issued numerous reports identifying weaknesses and expressing concerns about the accuracy of the process used by the department to determine munitions requirements. DOD has improved its munitions requirements process by eliminating most of the systematic problems--correcting questionable and inconsistently applied data, completing target templates, and resolving issues involving the level of detail that should be included in planning guidance. However, a fundamental problem remains unaddressed--inadequate linkage between the near-term munitions needs of the combatant commands and the purchases made by the military services based on computations derived from the department's munitions requirement determination process. The department's munitions requirements process provides varied answers for current munitions acquisitions questions because of the aforementioned disjunction. As a result, the services, in the short term, are purchasing some critically needed munitions based on available funding and contractors' production capacity. Although this approach may be necessary in the short term, it raises questions as to whether over the long term it would position the services to make the most efficient use of appropriated funds and whether the needs of combatant commands to carry out their missions will be met. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Aviation and Transportation Security Act (ATSA), enacted in November 2001, created TSA and gave it responsibility for securing all modes of transportation. As part of this responsibility, TSA oversees security operations at the nation’s more than 400 commercial airports, including establishing requirements for passenger and checked baggage screening and ensuring the security of air cargo transported to, from, and within the United States. TSA has operational responsibility for conducting passenger and checked baggage screening at most airports, and has regulatory, or oversight, responsibility for air carriers who conduct air cargo screening. While TSA took over responsibility for passenger checkpoint and baggage screening, air carriers have continued to conduct passenger watch-list matching in accordance with TSA requirements, which includes the process of matching passenger information against the No Fly and Selectee lists before flights depart. TSA is currently developing a program, known as Secure Flight, to take over this responsibility from air carriers for passengers on domestic flights, and plans to assume from U.S. Customs and Border Protection (CBP) this pre-departure name-matching function for passengers on international flights traveling to or from the United States. Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA established TSA and required it to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal screener workforce was put into place, as required, by November 2002. Passenger screening systems are composed of three elements: the people (TSOs) responsible for conducting the screening of airline passengers and their carry-on items, the technology used during the screening process, and the procedures TSOs are to follow to conduct screening. 
Collectively, these elements help to determine the effectiveness and efficiency of passenger screening operations. TSA’s responsibilities for securing air cargo include, among other things, establishing security rules and regulations governing domestic and foreign passenger air carriers that transport cargo, domestic and foreign all-cargo carriers that transport cargo, and domestic freight forwarders. TSA is also responsible for overseeing the implementation of air cargo security requirements by air carriers and freight forwarders through compliance inspections, and, in coordination with DHS’s Science and Technology (S&T) Directorate, for conducting research and development of air cargo security technologies. Air carriers (passenger and all-cargo) are responsible for implementing TSA security requirements, predominantly through TSA-approved security programs that describe the security policies, procedures, and systems the air carrier will implement and maintain to comply with TSA security requirements. Air carriers must also abide by security requirements issued by TSA through security directives or emergency amendments to air carrier security programs. Air carriers use several methods and technologies to screen domestic and inbound air cargo. These include manual physical searches and comparisons between airway bills and cargo contents to ensure that the contents of the cargo shipment match the cargo identified in documents filed by the shipper, as well as using approved technology, such as X-ray systems, explosives trace detection systems, decompression chambers, explosive detection systems, and certified explosive detection canine teams. Under TSA’s security requirements for domestic, outbound and inbound air cargo, passenger air carriers are currently required to randomly screen a specific percentage of nonexempt air cargo pieces listed on each airway bill.
TSA’s air cargo security requirements currently allow passenger air carriers to exempt certain types of cargo from physical screening. For such cargo, TSA has authorized the use of TSA-approved alternative methods for screening, which can consist of verifying shipper information and conducting a visual inspection of the cargo shipment. TSA requires all-cargo carriers to screen 100 percent of air cargo that exceeds a specific weight threshold. As of October 2006, domestic freight forwarders are also required, under certain conditions, to screen a certain percentage of air cargo prior to its consolidation. TSA, however, does not regulate foreign freight forwarders, or individuals or businesses that have their cargo shipped by air to the United States. Under the Implementing Recommendations of the 9/11 Commission Act of 2007, DHS is required to implement a system to screen 50 percent of air cargo transported on passenger aircraft by February 2009, and 100 percent of such cargo by August 2010. The prescreening of airline passengers who may pose a security risk before they board an aircraft is one of many layers of security intended to strengthen commercial aviation. To further enhance commercial aviation security and in accordance with the Intelligence Reform and Terrorism Prevention Act of 2004, TSA is developing the Secure Flight program to assume from air carriers the function of matching passenger information against government-supplied terrorist watch-lists for domestic flights. TSA expects to assume from air carriers the watch-list matching for domestic flights beginning in January 2009 and to assume this watch-list matching function from CBP for flights departing from and to the United States by fiscal year 2010. TSA has taken steps to strengthen the three key elements of the screening system—people (TSOs and private screeners), screening procedures, and technology—but has faced management, planning and funding challenges. 
For example, TSA has implemented several efforts intended to strengthen the allocation of its TSO workforce. We reported in February 2004 that staffing shortages and TSA’s hiring process had hindered the ability of some Federal Security Directors (FSD)—the ranking TSA authorities responsible for leading and coordinating security activities at airports—to provide sufficient resources to staff screening checkpoints and oversee screening operations at their checkpoints without using additional measures such as overtime. Since that time, TSA has developed a Staffing Allocation Model to determine TSO staffing levels at airports. FSDs we interviewed during 2006 as part of our review of TSA’s staffing model generally reported that the model is a more accurate predictor of staffing needs than TSA’s prior staffing model. However, FSDs expressed concerns about assumptions used in the fiscal year 2006 model related to the use of part-time TSOs, TSO training requirements, and TSOs’ operational support duties. To help ensure that TSOs are effectively utilized, we recommended that TSA establish a policy for when TSOs can be used to provide operational support. Consistent with our recommendation, in March 2007, TSA issued a management directive that provides guidance on assigning TSOs, through detail or permanent promotion, to duties of another position for a specified period of time. We also recommended that TSA establish a formal, documented plan for reviewing all of the model assumptions on a periodic basis to ensure that the assumptions result in TSO staffing allocations that accurately reflect operating conditions that may change over time. TSA agreed with our recommendation and, in December 2007, developed a Staffing Allocation Model Rates and Assumptions Validation Plan. The plan identifies the process TSA plans to use to review and validate the model’s assumptions on a periodic basis. 
Although we did not independently review TSA’s staffing allocation for fiscal year 2008, TSA’s fiscal year 2009 budget justification identified that the agency has achieved operational and efficiency gains that enabled it to implement or expand several workforce initiatives involving TSOs. For example, TSA implemented the travel document checker program at over 259 of the approximately 450 airports nationwide during fiscal year 2007. This program is intended to ensure that only passengers with authentic travel documents access the sterile areas of airports and board aircraft. TSA also deployed 643 behavior detection officers to 42 airports during fiscal year 2007. These officers screen passengers by observation techniques to identify potentially high-risk passengers based on involuntary physical and physiological reactions. In addition to TSA’s efforts to strengthen the allocation of its TSO workforce, TSA has taken steps to strengthen passenger checkpoint screening procedures to enhance the detection of prohibited items. However, we have identified areas where TSA could improve its evaluation and documentation of proposed procedures. In April 2007, we reported that TSA officials considered modifications to its standard operating procedures (SOP) based on risk information (threat and vulnerability information), daily experiences of staff working at airports, and complaints and concerns raised by the traveling public. We further reported that for more significant SOP modifications, TSA first tested the proposed modifications at selected airports to help determine whether the changes would achieve their intended purpose, as well as to assess their impact on screening operations. However, we reported that TSA’s data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose.
We also found that TSA’s documentation on proposed modifications to screening procedures was not complete. We recommended that TSA develop sound evaluation methods, when possible, to assess whether proposed screening changes would achieve their intended purpose and generate and maintain documentation on proposed screening changes that are deemed significant. DHS generally agreed with our recommendations and TSA has taken some steps to implement them. For example, for several proposed SOP changes considered during the fall of 2007, TSA provided documentation that identified the sources of the proposed changes and the reasons why the agency decided to accept or reject the proposed changes. With respect to technologies, we reported in February 2007 that S&T and TSA were exploring new passenger checkpoint screening technologies to enhance the detection of explosives and other threats. Of the various emerging checkpoint screening projects funded by TSA and S&T, the explosive trace portal, the bottled liquids scanning device, and Advanced Technology Systems have been deployed to airport checkpoints. A number of additional projects have initiated procurements or are being researched and developed. For example, TSA has procured 34 scanners for screening passenger casts and prosthetic devices to be deployed in July 2008. In addition, TSA has procured 20 checkpoint explosive detection systems and plans to deploy these in August 2008. Further, TSA plans to finish its testing of whole body imagers during fiscal year 2009 and begin deploying 150 of these units by fiscal year 2010. Despite TSA’s efforts to develop passenger checkpoint screening technologies, we reported that limited progress has been made in fielding explosives detection technology at airport checkpoints in part due to challenges S&T and TSA faced in coordinating research and development efforts. 
For example, we reported that TSA had anticipated that the explosives trace portals would be in operation throughout the country during fiscal year 2007. However, due to performance and maintenance issues, TSA halted the acquisition and deployment of the portals in June 2006. As a result, TSA has fielded less than 25 percent of the 434 portals it projected it would deploy by fiscal year 2007. In addition to the portals, TSA has fallen behind in its projected acquisition of other emerging screening technologies. For example, we reported that the acquisition of 91 whole body imagers was previously delayed in part because TSA needed to develop a means to protect the privacy of passengers screened by this technology. While TSA and DHS have taken steps to coordinate the research, development and deployment of checkpoint technologies, we reported in February 2007 that challenges remained. For example, TSA and S&T officials stated that they encountered difficulties in coordinating research and development efforts due to reorganizations within TSA and S&T. Since our February 2007 testimony, according to TSA and S&T, coordination between them has improved. We also reported that TSA did not have a strategic plan to guide its efforts to acquire and deploy screening technologies, and that a lack of a strategic plan or approach could limit TSA’s ability to deploy emerging technologies at those airport locations deemed at highest risk. TSA officials stated that they plan to submit the strategic plan for checkpoint technologies mandated by Division E of the Consolidated Appropriations Act, 2008, during the summer of 2008. We will continue to evaluate S&T’s and TSA’s efforts to research, develop and deploy checkpoint screening technologies as part of our ongoing review. TSA has taken steps to enhance domestic and inbound air cargo security, but more work remains to strengthen this area of aviation security. 
For example, TSA has issued an Air Cargo Strategic Plan that focused on securing the domestic air cargo supply chain. However, in April 2007, we reported that this plan did not include goals and objectives for addressing the security of inbound air cargo, or cargo transported into the United States from a foreign location, which presents different security challenges than cargo transported domestically. We also reported that TSA had not conducted vulnerability assessments to identify the range of security weaknesses that could be exploited by terrorists related to air cargo operations. We further reported that TSA had established requirements for air carriers to randomly screen air cargo, but had exempted some domestic and inbound cargo from screening. With respect to inbound air cargo, we reported that TSA lacked an inspection plan with performance goals and measures for its inspection efforts, and recommended that TSA develop such a plan. TSA is also taking steps to compile and analyze information on air cargo security practices used abroad to identify those that may strengthen DHS’s overall air cargo security program, as we recommended. According to TSA officials, the agency’s proposed Certified Cargo Screening Program (CCSP) is based on their review of foreign countries’ models for screening air cargo. TSA officials believe this program will assist the agency in meeting the requirement to screen 100 percent of cargo transported on passenger aircraft by August 2010, as mandated by the Implementing Recommendations of the 9/11 Commission Act of 2007. Through TSA’s proposed CCSP, the agency plans on allowing the screening of air cargo to take place at various points throughout the air cargo supply chain. Under the CCSP, Certified Cargo Screening Facilities (CCSF), such as shippers, manufacturing facilities and freight forwarders that meet security requirements established by TSA, will volunteer to screen cargo prior to its loading onto an aircraft. 
Due to the voluntary nature of this program, participation of the air cargo industry is critical to the successful implementation of the CCSP. According to TSA officials, air carriers will ultimately be responsible for screening 100 percent of cargo transported on passenger aircraft should air cargo industry entities not volunteer to become a CCSF. In July 2008, however, we reported that TSA may face challenges as it proceeds with its plans to implement a system to screen 100 percent of cargo transported on passenger aircraft by August 2010. Specifically, we reported that DHS has not yet completed its assessments of the technologies TSA plans to approve for use as part of the CCSP for screening and securing cargo. We also reported that although TSA has taken steps to eliminate the majority of exempted domestic and outbound cargo that it has not required to be screened, the agency currently plans to continue to exempt some types of domestic and outbound cargo from screening after August 2010. Moreover, we found that TSA has begun analyzing the results of air cargo compliance inspections and has hired additional compliance inspectors dedicated to air cargo. However, according to agency officials, TSA will need additional air cargo inspectors to oversee the efforts of the potentially thousands of entities that may participate in the CCSP once it is fully implemented. Finally, we reported that more work remains for TSA to strengthen the security of inbound cargo. Specifically, the agency has not yet finalized its strategy for securing inbound cargo or determined how, if at all, inbound cargo will be screened as part of its proposed CCSP. Over the past several years, TSA has faced a number of challenges in developing and implementing an advanced prescreening system, known as Secure Flight, which will allow TSA to assume responsibility from air carriers for comparing domestic passenger information against the No Fly and Selectee lists. 
We reported in February 2008 that TSA had made substantial progress in instilling more discipline and rigor in developing and implementing Secure Flight, but that challenges remain that may hinder the program’s progress moving forward. For example, TSA had taken numerous steps to address previous GAO recommendations related to strengthening Secure Flight’s development and implementation, as well as additional steps designed to strengthen the program. Among other things, TSA developed a detailed, conceptual description of how the system is to operate, commonly referred to as a concept of operations; established a cost and schedule baseline; developed security requirements; developed test plans; conducted outreach with key stakeholders; published a notice of proposed rulemaking on how Secure Flight is to operate; worked with CBP to integrate the domestic watch list matching function with the international watch list matching function currently operated by CBP; and issued a guide to key stakeholders (e.g., air carriers and CBP) that defines, among other things, system data requirements. Collectively, these efforts have enabled TSA to more effectively manage the program’s development and implementation. However, challenges remain that may hinder the program’s progress moving forward. In February 2008, we reported that TSA had not (1) developed program cost and schedule estimates consistent with best practices; (2) fully implemented its risk management plan; (3) planned for system end-to-end testing in test plans; and (4) ensured that information-security requirements are fully implemented. To address these challenges, we made several recommendations to DHS and TSA to incorporate best practices in Secure Flight’s cost and schedule estimates and to fully implement the program’s risk-management, testing, and information-security requirements. DHS and TSA officials generally agreed with these recommendations.
We will continue to evaluate TSA’s efforts to develop and implement Secure Flight as part of our ongoing review. Our work has identified homeland security challenges that cut across DHS’s and TSA’s mission and core management functions. These issues have impeded the department’s and TSA’s progress since its inception and will continue to confront the department as it moves forward. For example, DHS and TSA have not always implemented effective strategic planning efforts and have not yet fully developed performance measures or put into place structures to help ensure that they are managing for results. For example, with regard to TSA’s efforts to secure air cargo, we reported in October 2005 and April 2007 that TSA completed an Air Cargo Strategic Plan that outlined a threat-based risk-management approach to securing the nation’s domestic air cargo system. However, TSA had not developed a similar strategy for addressing the security of inbound air cargo, including how best to partner with CBP and international air cargo stakeholders. In addition, although DHS and TSA have made risk-based decision making a cornerstone of departmental and agency policy, TSA could strengthen its application of risk management in implementing its mission functions. For example, TSA incorporated risk-based decision making when making modifications to airport checkpoint screening procedures, to include modifying procedures based on intelligence information and vulnerabilities identified through covert testing at airport checkpoints. However, in April 2007, we reported that TSA’s analyses that supported screening procedural changes could be strengthened. For example, TSA officials based their decision to revise the prohibited items list to allow passengers to carry small scissors and tools onto aircraft based on their review of threat information—which indicated that these items do not pose a high risk to the aviation system—so that TSOs could concentrate on higher threat items. 
However, TSA officials did not conduct the analysis necessary to help them determine whether this screening change would affect TSOs’ ability to focus on higher-risk threats. We also reported that, although improvements are being made, homeland security roles and responsibilities within and between the levels of government, and with the private sector, are evolving and need to be clarified. For example, we reported that opportunities exist for TSA to work with foreign governments and industry to identify best practices for securing air cargo, and recommended that TSA systematically compile and analyze information on practices used abroad to identify those that may strengthen the department’s overall security efforts. TSA has subsequently reviewed the models used in two foreign countries that rely on government-certified screeners to screen air cargo to facilitate the design of the agency’s proposed CCSP. Regarding efforts to respond to in-flight security threats, which, depending on the nature of the threat, could involve more than 15 federal agencies and agency components, in July 2007, we recommended that DHS and other departments document and share their respective coordination and communication strategies and response procedures, to which DHS agreed. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information on this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Chris Currie; Joe Dewechter; Vanessa DeVeau; Thomas Lombardi; Steve Morris, Assistant Director; Meg Ullengren; and Margaret Vo made contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since its inception in November 2001, the Transportation Security Administration (TSA) has focused much of its efforts on aviation security, and has developed and implemented a variety of programs and procedures to secure the commercial aviation system. TSA funding for aviation security has totaled about $26 billion since fiscal year 2004. This testimony focuses on TSA’s efforts to secure the commercial aviation system through passenger screening, strengthening air cargo security, and watch-list matching programs, as well as challenges that remain. It also addresses crosscutting issues that have impeded TSA’s efforts in strengthening security. This testimony is based on GAO reports and testimonies issued from February 2004 through July 2008, including selected updates obtained from TSA officials in June and July 2008. DHS and TSA have undertaken numerous initiatives to strengthen the security of the nation’s commercial aviation system, including actions to address many recommendations made by GAO. TSA has focused its efforts on, among other things, more efficiently allocating, deploying, and managing the Transportation Security Officer (TSO) workforce, formerly known as screeners; strengthening screening procedures; developing and deploying more effective and efficient screening technologies; strengthening domestic air cargo security; and developing a government-operated watch-list matching program, known as Secure Flight. For example, in response to GAO’s recommendation, TSA developed a plan to periodically review assumptions in its Staffing Allocation Model used to determine TSO staffing levels at airports, and took steps to strengthen its evaluation of proposed procedural changes.
TSA also explored new passenger checkpoint screening technologies to better detect explosives and other threats, and has taken steps to strengthen air cargo security, including increasing compliance inspections of air carriers. Finally, TSA has instilled more discipline and rigor into Secure Flight's systems development, including preparing key documentation and strengthening privacy protections. While these efforts should be commended, GAO has identified several areas that should be addressed to further strengthen security. For example, TSA made limited progress in developing and deploying checkpoint technologies due to planning and management challenges. In addition, TSA faces resource and other challenges in developing a system to screen 100 percent of cargo transported on passenger aircraft in accordance with the Implementing Recommendations of the 9/11 Commission Act of 2007. GAO further identified that TSA faced program management challenges in the development and implementation of Secure Flight, including developing cost and schedule estimates consistent with best practices; fully implementing the program's risk management plan; developing a comprehensive testing strategy; and ensuring that information security requirements are fully implemented. A variety of crosscutting issues have affected DHS's and TSA's efforts in implementing its mission and management functions. For example, TSA can more fully adopt and apply a risk-management approach in implementing its security mission and core management functions, and strengthen coordination activities with key stakeholders. For example, while TSA incorporated risk-based decision making when modifying checkpoint screening procedures, GAO reported that TSA's analyses that supported screening procedural changes could be further strengthened. DHS and TSA have strengthened their efforts in these areas, but more work remains. |
Social Security provides retirement, disability, and survivor benefits to insured workers and their dependents. Insured workers are eligible for reduced benefits at age 62 and full retirement benefits between age 65 and 67, depending on their year of birth. Social Security retirement benefits are based on the worker’s age and career earnings, are fully indexed for inflation after retirement, and replace a relatively higher proportion of wages for career low-wage earners. Social Security’s primary source of revenue is the Old Age, Survivors, and Disability Insurance (OASDI) portion of the payroll tax paid by employers and employees. The OASDI payroll tax is 6.2 percent of earnings each for employers and employees, up to an established maximum. One of Social Security’s most fundamental principles is that benefits reflect the earnings on which workers have paid taxes. Social Security provides benefits that workers have earned to some degree because of their contributions and those of their employers. At the same time, Social Security helps ensure that its beneficiaries have adequate incomes and do not have to depend on welfare. Toward this end, Social Security’s benefit provisions redistribute income in a variety of ways—from those with higher lifetime earnings to those with lower ones, from those without dependents to those with dependents, from single earners and two-earner couples to one-earner couples, and from those who do not live very long to those who do. These effects result from the program’s focus on helping ensure adequate incomes. Such effects depend to a great degree on the universal and compulsory nature of the program. According to the Social Security trustees’ 2003 intermediate, or best- estimate, assumptions, Social Security’s cash flow is expected to turn negative in 2018. In addition, all of the accumulated Treasury obligations held by the trust funds are expected to be exhausted by 2042. 
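The payroll-tax mechanics described above reduce to a capped percentage applied twice. A minimal sketch of that arithmetic in Python; the dollar value of the taxable maximum is illustrative only, since the testimony says "up to an established maximum" without giving a figure:

```python
def oasdi_tax(earnings, taxable_maximum=168_600):
    """OASDI payroll tax: 6.2 percent of earnings each for the employee
    and the employer, applied only up to the taxable maximum.
    The default cap here is an assumed, illustrative value -- the
    testimony does not state the figure.
    Returns (employee_share, employer_share)."""
    taxable = min(earnings, taxable_maximum)
    share = 0.062 * taxable
    return share, share  # employee and employer pay equal shares

# Below the cap, each side pays 6.2 percent of full earnings
# (about 3,100 each on 50,000 of wages):
employee_share, employer_share = oasdi_tax(50_000)
```

Earnings above the cap are simply truncated before the rate is applied, which is why the tax is sometimes described as regressive at high wage levels.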
Social Security’s long-term financing shortfall stems primarily from the fact that people are living longer. As a result, the number of workers paying into the system for each beneficiary has been falling and is projected to decline from 3.3 today to about 2 by 2030. Reductions in promised benefits and/or increases in program revenues will be needed to restore the long-term solvency and sustainability of the program. About one-fourth of public employees do not pay Social Security taxes on the earnings from their government jobs. Historically, Social Security did not require coverage of government employees because they had their own retirement systems, and there was concern over the question of the federal government’s right to impose a tax on state governments. However, virtually all other workers are now covered, including the remaining three-fourths of public employees. The 1935 Social Security Act mandated coverage for most workers in commerce and industry, which at that time comprised about 60 percent of the workforce. Subsequently, the Congress extended mandatory Social Security coverage to most of the excluded groups, including state and local employees not covered by a public pension plan. The Congress also extended voluntary coverage to state and local employees covered by public pension plans. Since 1983, however, public employers have not been permitted to withdraw from the program once they are covered. Also, in 1983, the Congress extended mandatory coverage to newly hired federal workers. The Social Security Administration (SSA) estimates that 5.25 million state and local government employees, excluding students and election workers, are not covered by Social Security. SSA also estimates that annual wages for these noncovered employees totaled about $171 billion in 2002. In addition, 1 million federal employees hired before 1984 are also not covered. 
Seven states—California, Colorado, Illinois, Louisiana, Massachusetts, Ohio, and Texas—account for more than 75 percent of the noncovered payroll. Most full-time public employees participate in defined benefit pension plans. Minimum retirement ages for full benefits vary; however, many state and local employees can retire with full benefits at age 55 with 30 years of service. Retirement benefits also vary, but they are usually based on a specified benefit rate for each year of service and the member’s final average salary over a specified time period, usually 3 years. For example, plans with a 2-percent rate replace 60 percent of a member’s final average salary after 30 years of service. In addition to retirement benefits, a 1994 U.S. Department of Labor survey found that all members have a survivor annuity option, 91 percent have disability benefits, and 62 percent receive some cost-of-living increases after retirement. In addition, in recent years, the number of defined-contribution plans, such as 401(k) plans and the Thrift Savings Plan for federal employees, has been growing and becoming a relatively more common way for employers to offer pension plans; public employers are no exception to this trend. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses’ or their own earnings in covered employment. SSA estimates that 95 percent of noncovered state and local employees become entitled to Social Security as workers, spouses, or dependents. Their noncovered status complicates the program’s ability to target benefits in the ways it is intended to do. To address the fairness issues that arise with noncovered public employees, Social Security has two provisions—GPO, which addresses spouse and survivor benefits and WEP, which addresses retired worker benefits. 
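The defined-benefit arithmetic described above (a per-year benefit rate applied to a final average salary) can be sketched as follows. The 2-percent rate and 3-year averaging window are the examples the text itself uses; the salary figures are hypothetical:

```python
def pension_benefit(final_salaries, benefit_rate, years_of_service):
    """Annual defined-benefit pension: benefit rate x years of service
    x final average salary over the plan's averaging period
    (often the member's last 3 years)."""
    final_average_salary = sum(final_salaries) / len(final_salaries)
    return benefit_rate * years_of_service * final_average_salary

# A 2-percent plan after 30 years replaces 0.02 x 30 = 60 percent of the
# final average salary; with a 60,000 average, that is 36,000 per year.
annual_benefit = pension_benefit([58_000, 60_000, 62_000], 0.02, 30)
```

The replacement rate is just the product of the benefit rate and years of service, which is why the testimony can summarize a 2-percent plan as replacing 60 percent of salary after 30 years.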
Both provisions depend on having complete and accurate information that has proven difficult to get. Also, both provisions are a source of confusion and frustration for public employees and retirees. As a result, proposals have been offered to revise or eliminate both provisions. Under the GPO provision, enacted in 1977, SSA must reduce Social Security benefits for those receiving noncovered government pensions when their entitlement to Social Security is based on another person’s (usually their spouse’s) Social Security coverage. Their Social Security benefits are to be reduced by two-thirds of the amount of their government pension. Under the WEP, enacted in 1983, SSA must use a modified formula to calculate the Social Security benefits people earn when they have had a limited career in covered employment. This formula reduces the amount of payable benefits. Regarding GPO, spouse and survivor benefits were intended to provide some Social Security protection to spouses with limited working careers. The GPO provision reduces spouse and survivor benefits to persons who do not meet this limited working career criterion because they worked long enough in noncovered employment to earn their own pension. Regarding WEP, the Congress was concerned that the design of the Social Security benefit formula provided unintended windfall benefits to workers who spent most of their careers in noncovered employment. The formula replaces a higher portion of preretirement Social Security-covered earnings when people have low average lifetime earnings than it does when people have higher average lifetime earnings. People who work exclusively, or have lengthy careers, in noncovered employment appear on SSA’s earnings records as having no covered earnings or a low average of covered lifetime earnings. 
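The GPO reduction described above is a fixed two-thirds offset against the spouse or survivor benefit. A minimal sketch of that arithmetic; the dollar amounts are illustrative, not from the testimony:

```python
def gpo_reduced_benefit(spousal_benefit, noncovered_pension):
    """Government Pension Offset: reduce the Social Security spouse or
    survivor benefit by two-thirds of the noncovered government pension,
    but never below zero."""
    offset = noncovered_pension * 2 / 3
    return max(0.0, spousal_benefit - offset)

# Illustrative: a 900/month spousal benefit and a 1,200/month noncovered
# pension give an 800 offset, leaving a 100/month payable benefit; a
# large enough pension eliminates the spousal benefit entirely.
payable = gpo_reduced_benefit(900, 1200)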
As a result, people with this type of earnings history benefit from the advantage given to people with low average lifetime earnings when in fact their total (covered plus noncovered) lifetime earnings were higher than they appear to be for purposes of calculating Social Security benefits. Both GPO and WEP apply only to those beneficiaries who receive pensions from noncovered employment. To administer these provisions, SSA needs to know whether beneficiaries receive such noncovered pensions. However, our prior work found that SSA lacks payment controls and is often unable to determine whether applicants should be subject to GPO or WEP because it has not developed any independent source of noncovered pension information. In that report, we estimated that failure to reduce benefits for federal, state, and local employees caused $160 million to $355 million in overpayments between 1978 and 1995. In response to our recommendation, SSA performed additional computer matches with the Office of Personnel Management to get noncovered pension data for federal retirees in order to ensure that these provisions are applied. These computer matches detected payment errors; correcting these errors will generate hundreds of millions of dollars in savings, according to our estimates. Also, in that report, we recommended that SSA work with the Internal Revenue Service (IRS) to revise the reporting of pension information on IRS Form 1099R, so that SSA would be able to identify people receiving a pension from noncovered employment, especially in state and local governments. However, IRS does not believe it can make the recommended change without new legislative authority. Given that one of our recommendations was implemented but not the other, SSA now has better access to information for federal employees but not for state and local employees. As a result, SSA cannot apply GPO and WEP for state and local government employees to the same degree that it does for federal employees. 
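The two reductions described above can be sketched in a few lines. This is an illustration, not a benefit determination: the tiered percentage factors (90/32/15, with WEP lowering the first factor toward 40 percent) reflect the general shape of Social Security's formula but do not appear in the text, and the dollar bend points are invented placeholders that in reality change every year.

```python
# Sketch of the GPO and WEP mechanics described in the text. The bend
# points and all dollar amounts are assumptions for illustration only.

BEND_1, BEND_2 = 1_000, 6_000  # assumed monthly bend points, not real values

def monthly_worker_benefit(aime, first_factor=0.90):
    """Tiered benefit formula applied to average indexed monthly earnings (AIME)."""
    benefit = first_factor * min(aime, BEND_1)
    if aime > BEND_1:
        benefit += 0.32 * (min(aime, BEND_2) - BEND_1)
    if aime > BEND_2:
        benefit += 0.15 * (aime - BEND_2)
    return benefit

def gpo_reduced_spousal_benefit(spousal_benefit, government_pension):
    """GPO: spouse/survivor benefit reduced by two-thirds of the noncovered pension."""
    return max(0.0, spousal_benefit - (2 / 3) * government_pension)

# WEP: a short covered career looks like low lifetime earnings (an AIME of
# $1,500 here), so the regular formula applies its generous first tier;
# WEP substitutes a lower first factor to remove that windfall.
regular = monthly_worker_benefit(1_500)                    # 0.90*1000 + 0.32*500
wep = monthly_worker_benefit(1_500, first_factor=0.40)     # 0.40*1000 + 0.32*500

# GPO: a $900 spousal benefit is offset by two-thirds of a $600 pension
# ($400), leaving $500.
gpo = gpo_reduced_spousal_benefit(900, 600)
```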
To address issues such as these, the President’s budget proposes “to increase Social Security payment accuracy by giving SSA the ability to independently verify whether beneficiaries have pension income from employment not covered by Social Security.” In addition to facing administrative challenges, GPO and WEP have also faced criticism regarding their design in the law. For example, GPO does not apply if an individual’s last day of state/local employment is in a position that is covered by Social Security. This GPO “loophole” raises fairness and equity concerns. In the states we visited for a previous report, individuals with a relatively minimal investment of work time and Social Security contributions gained access to potentially many years of full Social Security spousal benefits. To address this issue, the House recently passed legislation that provides for a longer minimum time period in covered employment. At the same time, GPO and WEP have been a source of confusion and frustration for the roughly 6 million workers and nearly 1 million beneficiaries they affect. Critics of the measures contend that they are basically inaccurate and often unfair. For example, some opponents of WEP argue that the formula adjustment is an arbitrary and inaccurate way to estimate the value of the windfall and causes a relatively larger benefit reduction for lower-paid workers. A variety of proposals have been offered to either revise or eliminate them. While we have not studied these proposals in detail, I would like to offer a few observations to keep in mind as you consider them. First, repealing these provisions would be costly in an environment where the Social Security trust funds already face long-term solvency issues. According to SSA and the Congressional Budget Office (CBO), proposals to reduce the number of beneficiaries subject to GPO would cost $5 billion or more over the next 10 years and increase Social Security’s long-range deficit by up to 1 percent. 
Eliminating GPO entirely would cost $21 billion over 10 years and increase the long-range deficit by about 3 percent. Similarly, a proposal that would reduce the number of beneficiaries subject to WEP would cost $19 billion over 10 years, and eliminating WEP would increase Social Security’s long-range deficit by 3 percent. Second, in thinking about the fairness of the provisions and whether or not to repeal them, it is important to consider both the affected public employees and all other workers and beneficiaries who pay Social Security taxes. For example, SSA has described GPO as a way to treat spouses with noncovered pensions in a fashion similar to how it treats dually entitled spouses, who qualify for Social Security benefits both on their own work records and their spouses’. In such cases, each spouse may not receive both the benefits earned as a worker and the full spousal benefit; rather, the worker receives the higher of the two amounts. If GPO were eliminated or reduced for spouses who had paid little or no Social Security taxes on their lifetime earnings, it might be reasonable to ask whether the same should be done for dually entitled spouses who have paid Social Security on all their earnings. Far more spouses are subject to the dual-entitlement offset than to GPO; as a result, the costs of eliminating the dual-entitlement offset would be commensurately greater. Aside from the issues surrounding GPO and WEP, another aspect of the relationship between Social Security and public employees is the question of mandatory coverage. Making coverage mandatory has been proposed in the past to help address the program’s financing problems. According to Social Security actuaries, doing so would reduce the 75-year actuarial deficit by 10 percent. Mandatory coverage could also enhance inflation protection for the affected beneficiaries, improve portability, and add dependent benefits in many cases.
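The contrast drawn above between the dual-entitlement rule and GPO can be sketched directly; the dollar amounts below are invented for illustration, and both functions are simplifications of rules that involve many more conditions in practice.

```python
# Sketch contrasting the dual-entitlement rule described in the text (the
# higher of the worker and spousal benefits is paid, not the sum) with the
# GPO two-thirds offset. Amounts are invented; real determinations are
# far more involved.

def dual_entitlement_payment(own_worker_benefit, spousal_benefit):
    """Dually entitled spouses receive the higher of the two benefits."""
    return max(own_worker_benefit, spousal_benefit)

def gpo_payment(spousal_benefit, noncovered_pension):
    """GPO reduces the spousal benefit by two-thirds of the noncovered pension."""
    return max(0.0, spousal_benefit - (2 / 3) * noncovered_pension)

# A spouse with a $700 covered-worker benefit and a $500 spousal benefit
# receives $700, not $1,200.
dual = dual_entitlement_payment(700, 500)
```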
However, to provide for the same level of retirement income, mandatory coverage could increase costs for the state and local governments that would sponsor the plans. Moreover, if coverage were extended primarily to new state and local employees, GPO and WEP would continue to apply for many years to come for existing employees and beneficiaries even though they would become obsolete in the long run. While Social Security’s solvency problems have triggered an analysis of the impact of mandatory coverage on program revenues and expenditures, the inclusion of such coverage in a comprehensive reform package would need to be grounded in other considerations. In recommending that mandatory coverage be included in the reform proposals, the 1994-1996 Social Security Advisory Council stated that mandatory coverage is basically “an issue of fairness.” The Advisory Council’s report noted that “an effective Social Security program helps to reduce public costs for relief and assistance, which, in turn, means lower general taxes. There is an element of unfairness in a situation where practically all contribute to Social Security, while a few benefit both directly and indirectly but are excused from contributing to the program.” The impact on public employers, employees, and pension plans would depend on how states and localities with noncovered employees would react to mandatory coverage. Many public pension plans currently offer a lower retirement age and higher retirement income benefit than Social Security. For example, many public employees, especially police and firefighters, retire before they are eligible for full Social Security benefits; new plans that include Social Security coverage might provide special supplemental benefits for those who retire before they could receive Social Security benefits. 
Social Security, on the other hand, offers automatic inflation protection, full benefit portability, and dependent benefits, which are not available in many public pension plans. Costs could increase by as much as 11 percent of payroll for those states and localities, depending on the benefit package of the new plans that would include Social Security coverage. Alternatively, states and localities that wanted to maintain level spending for retirement would likely need to reduce some pension benefits. Additionally, states and localities could require several years to design, legislate, and implement changes to current pension plans. Finally, mandating Social Security coverage for state and local employees could elicit a constitutional challenge. There are no easy answers to the difficulties of equalizing Social Security’s treatment of covered and noncovered workers. Any reductions in GPO or WEP would ultimately come at the expense of other Social Security beneficiaries and taxpayers. Mandating universal coverage would promise the eventual elimination of GPO and WEP but at potentially significant cost to affected state and local governments, and even so GPO and WEP would continue to apply for some years to come, unless they were repealed. Whatever the decision, it will be important to administer all elements of the Social Security program effectively and equitably. GPO and WEP have proven difficult to administer because they depend on complete and accurate reporting of government pension income, which is not currently achieved. The resulting disparities in the application of these two provisions are yet another source of unfairness in the final outcome. We have made recommendations to the Internal Revenue Service to provide for complete and accurate reporting, but it has responded that it lacks the necessary authority from the Congress. We therefore take this opportunity to bring the matter to the Subcommittee’s attention for consideration.
To facilitate complete and accurate reporting of government pension income, the Congress should consider giving IRS the authority to collect this information, which could perhaps be accomplished through a simple modification to a single form. Mr. Chairman, this concludes my statement; I would be happy to respond to any questions you or other members of the Subcommittee may have. For information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, on (202) 512-7215. Individuals who made key contributions to this testimony include Daniel Bertoni and Ken Stockbridge.

Social Security covers about 96 percent of all US workers; the vast majority of the rest are state, local, and federal government employees. While these noncovered workers do not pay Social Security taxes on their government earnings, they may still be eligible for Social Security benefits. This poses difficult issues of fairness, and Social Security has provisions that attempt to address those issues, but critics contend these provisions are themselves often unfair. Congress asked GAO to discuss these provisions as well as the implications of mandatory coverage for public employees. Social Security's provisions regarding public employees are rooted in the fact that about one-fourth of them do not pay Social Security taxes on the earnings from their government jobs, for various historical reasons. Even though noncovered employees may have many years of earnings on which they do not pay Social Security taxes, they can still be eligible for Social Security benefits based on their spouses' or their own earnings in covered employment. To address the issues that arise with noncovered public employees, Social Security has two provisions--the Government Pension Offset (GPO), which affects spouse and survivor benefits, and the Windfall Elimination Provision (WEP), which affects retired worker benefits.
Both provisions reduce Social Security benefits for those who receive noncovered pension benefits. Both provisions also depend on having complete and accurate information on receipt of such noncovered pension benefits. However, such information is not available for many state and local pension plans, even though it is for federal pension benefits. As a result, GPO and WEP are not applied consistently for all noncovered pension recipients. In addition to the administrative challenges, these provisions are viewed by some as confusing and unfair, and a number of proposals have been offered to either revise or eliminate GPO and WEP. Such actions, while they may reduce confusion among affected workers, would increase the long-range Social Security trust fund deficit and could create fairness issues for workers who have contributed to Social Security throughout their working lifetimes. Making coverage mandatory has been proposed to help address the program's financing problems, and doing so could ultimately eliminate the need for the GPO and the WEP. According to Social Security actuaries, mandatory coverage would reduce the 75-year actuarial deficit by 10 percent. However, to provide for the same level of retirement income, mandating coverage would increase costs for the state and local governments that would sponsor the plans. Moreover, GPO and WEP would still be needed for many years to come even though they would become obsolete in the long run.
Under its 1957 statute, IAEA is authorized, among other things, to facilitate the peaceful uses of nuclear energy, including the production of electric power, by supplying materials, services, equipment and facilities to its member states, particularly considering the needs of the developing countries. About 90 countries receive technical assistance, mostly through over 1,000 projects in IAEA’s technical cooperation program. IAEA’s technical cooperation program funds projects in 10 major program areas, including agriculture, the development of member states’ commercial nuclear power programs, and nuclear safety. The average cost of a member state’s technical assistance project is about $60,000. IAEA provided about $800 million in technical assistance to its member states from 1958 through 1996, for equipment, expert services, training, and subcontracts (agreements between IAEA and a third party to provide services to IAEA member states). IAEA’s training activities include fellowships, scientific visits, and training courses. Egypt was the largest recipient of IAEA’s technical assistance overall. About 44 percent of the assistance was spent for equipment, and—from 1980 through 1996—about half of the funds were provided for assistance in three program areas—the application of isotopes and radiation in agriculture, general atomic energy development, and safety in nuclear energy. For 1997 through 1998, IAEA approved $154 million more in technical assistance for its member states. Technical assistance projects are approved by IAEA’s Board of Governors for a 2-year programming cycle, and member states are required to submit written project proposals to IAEA 1 year before the start of the programming cycle. The proposals are appraised for funding by IAEA staff and IAEA member states in terms of the projects’ technical and practical feasibility, national development priorities, and the projects’ long-term advantages to the recipient countries.
Because IAEA’s full-scope safeguards, as embodied in the 1970 Treaty on the Non-Proliferation of Nuclear Weapons (NPT), emerged after IAEA was established, all IAEA member states in good standing are eligible for the same privileges, including receiving technical assistance. IAEA does not bar technical assistance for member states that do not have IAEA’s full-scope safeguards or are not parties to the NPT. For example, Pakistan, Israel, and Cuba receive IAEA’s technical assistance but do not have full-scope safeguards and are not parties to the NPT. U.S. participation in IAEA’s technical cooperation program is coordinated through an interagency group—the International Nuclear Technology Liaison Office—which is chaired by the Department of State and includes representatives from the Department of Energy (DOE), the Arms Control and Disarmament Agency (ACDA), and the Nuclear Regulatory Commission (NRC). The United States also maintains a presence at IAEA through the U.S. Mission to the United Nations System Organizations in Vienna, Austria. U.S. contractors from Argonne National Laboratory and the National Academy of Sciences/National Research Council support U.S. training and fellowship activities for the program. In addition to developing and coordinating U.S. policy towards IAEA’s technical cooperation program, the interagency group (1) proposes and recommends U.S. support for specific projects—known as “footnote a” projects—only in IAEA member states that are parties to the NPT or other nuclear nonproliferation treaties; (2) selects courses and participants for U.S.-hosted IAEA training courses and places IAEA fellows at U.S. institutions, such as national laboratories and universities; (3) facilitates purchases of U.S. equipment on behalf of IAEA; (4) recommends U.S. experts and consultants to represent the United States at IAEA meetings, conferences, and symposia; and (5) recruits U.S. nationals to provide expert advice to IAEA and to staff IAEA’s operations.
In addition, according to a U.S. Mission official, almost 200 U.S. nationals are employed by IAEA. U.S. officials and representatives of other IAEA major donor countries told us that the principal purpose of IAEA’s technical cooperation program is to help ensure that IAEA member states, many of whom are developing countries, support IAEA’s safeguards and the NPT. Most of the member states participate in IAEA primarily for the nuclear technical assistance it provides. In the past, the United States and other major donors raised concerns about the effectiveness and efficiency of the technical cooperation program. However, since 1992, IAEA has been implementing improvements to the program that the United States and other IAEA member states strongly support. While the United States and other IAEA major donor countries believe that applying safeguards is IAEA’s most important function, most developing countries believe that receiving technical assistance through the technical cooperation program is just as important, and they participate in IAEA primarily for the technical assistance it provides. State Department, ACDA, and NRC officials told us that the principal purpose of U.S. participation in IAEA’s technical cooperation program is to help ensure that IAEA member states, many of whom are developing countries, support IAEA’s nuclear safeguards system and the NPT. A State Department document noted that the United States regarded support for the technical cooperation program to developing countries as the “price tag” for safeguards. At an October 1996 meeting, IAEA’s Director General told us that the opportunity to receive technical assistance dissuades member states from engaging in the proliferation of nuclear weapons. Representatives from four IAEA major donor countries—Australia, Canada, Germany, and Japan—told us that they generally agree with U.S. views that technical assistance is necessary to ensure that developing countries support safeguards and the NPT. 
However, representatives from six developing countries that have benefited from IAEA’s technical assistance—Argentina, Brazil, China, India, Pakistan, and South Africa—told us that their countries participate in IAEA primarily because their participation enables them to receive technical assistance. According to the representatives from India, Pakistan, and South Africa, IAEA would simply become an international “policing” organization for monitoring compliance with safeguards if IAEA did not provide technical assistance. A U.S. Mission official stated that several member states, including India and Pakistan, would be likely to withdraw from IAEA if its technical assistance were severely scaled back. According to IAEA officials, IAEA carries out its dual responsibilities and manages the competing interests of its member states by maintaining a balance in funding between providing technical assistance and ensuring compliance with safeguards. As figure 1 shows, in 1996, IAEA spent about $97 million on safeguards and about $89 million on technical assistance, accounting for approximately 30 percent and 27 percent, respectively, of IAEA’s total expenditures of about $325 million. [Figure 1: 1996 IAEA expenditures, including technical assistance ($89.0 million) and other programs ($67.2 million)] In the past, officials in the United States and other IAEA major donor countries had concerns about the effectiveness and efficiency of the technical cooperation program. A 1993 State Department cable stated that the United States had long been concerned that “footnote a” projects were devoid of significant technical, health, or socioeconomic benefit to the recipient country. Some of the evaluations that we reviewed indicated other deficiencies in the technical cooperation program.
For example, an October 1993 special evaluation review of lessons learned from completed evaluation reviews noted that inadequate project plans and designs resulted in implementation problems and delays in 30 percent of the technical assistance projects reviewed from 1988 through 1993. Some of the negative effects IAEA cited that resulted from insufficient project planning included (1) approving a 2-year project without obtaining sufficient evidence about its feasibility; (2) planning research reactor activities that did not yield significant results because they were premature or ambitious in relation to local resources; and (3) conducting nuclear physics projects in Africa that lacked clear results and benefits to the recipient country. IAEA officials in the Department of Technical Cooperation told us they have not prepared a comprehensive report on the accomplishments of the program since its inception in 1958. Although IAEA has provided its member states with detailed descriptions of all of its technical assistance projects, it did not assess the success or failure of these projects in the past. According to the head of IAEA’s Department of Technical Cooperation’s Evaluation Section, evaluations of projects’ impact were not required because IAEA was focusing on the efficiency of projects’ implementation. Moreover, IAEA stated that in 1993, the technical cooperation program’s priorities shifted from implementing research and infrastructure-building activities efficiently to designing projects that have an impact on the end-user and provide nuclear science and technology activities that contribute to national development. IAEA noted that it is unrealistic to expect impact analyses of projects designed and implemented according to standards that did not embody measures of impact at the time. In the year 2000, IAEA plans to review the program’s performance against the criteria for success contained in IAEA’s strategy for technical cooperation. 
We reviewed 40 reports prepared by IAEA’s Department of Technical Cooperation’s Evaluation Section and summaries of four audits of the program prepared by IAEA’s Office of Internal Audit and Evaluation Support, which covered the period from 1985 through 1996, to determine whether they contained assessments of the program’s effectiveness. We found that most of the 40 reports and audit summaries did not assess the impact of specific technical assistance projects, and no performance criteria had been established to help measure the success or failure of the projects. The evaluations and audits were also limited because insufficient travel funds generally precluded visits by IAEA staff to the recipient nations. We also reviewed the project files for four selected technical assistance projects in Iran, North Korea, Bulgaria, and Egypt that had been completed or canceled by IAEA. None of the project files we reviewed contained information on the project’s accomplishments. Our review of other project files was limited by IAEA’s policy on confidentiality, which regards information obtained by IAEA under a technical cooperation project as belonging to the country receiving the project. Under this policy, IAEA cannot divulge information about a project without the formal consent of the receiving country’s government. Since 1992, IAEA’s Deputy Director General for Technical Cooperation has taken steps to improve the effectiveness and efficiency of the technical cooperation program. For example, IAEA is establishing a system for measuring the quality and performance of some of its technical assistance projects. However, in 1996, IAEA’s Secretariat reported to the Board of Governors that outcomes were still clearly defined for only 25 percent of the 90 technical assistance projects whose results they had monitored from January through October 1996. 
The Evaluation Section of IAEA’s Department of Technical Cooperation is also helping the department to establish criteria for measuring the results of a project while planning it. The United States and other IAEA major donor countries strongly support IAEA’s efforts to improve the effectiveness and efficiency of the program, but U.S. officials are concerned that all of the improvements may not be fully implemented and made permanent in the 2 years before the term of the current Deputy Director General for Technical Cooperation ends. (App. I discusses the status of IAEA’s efforts to improve the effectiveness and efficiency of the technical cooperation program and the U.S. position on these actions.) According to a State Department cable describing the results of meetings held in September 1996, the major donors in attendance were highly supportive of IAEA’s initiatives to improve the program. The donors concluded that they were under increasing pressure at home to demonstrate that their countries’ contributions to IAEA were being well spent; supportive of the Deputy Director General for Technical Cooperation’s efforts to make the entire technical cooperation program more efficient and effective; concerned because the technical cooperation program had not set priorities or established a schedule for accomplishing improvements to the program; and concerned that IAEA’s Department of Technical Cooperation may not have the management skills required to accomplish these improvements. More recently, during the Board of Governors’ June 1997 meeting, the members highly praised IAEA’s efforts in carrying out its initiatives to improve the effectiveness and efficiency of the technical cooperation program. Most of the funding for IAEA’s technical cooperation program—about 70 percent—comes from voluntary contributions made by member states to IAEA’s technical cooperation fund. 
In 1996, the United States provided a total of about $99 million to IAEA, which consisted of about $63 million for IAEA’s regular budget and an additional voluntary contribution of $36 million. About $16 million of the $36 million U.S. voluntary contribution to IAEA went to the technical cooperation fund; this contribution represented about 32 percent of the fund, which totaled $49 million. The remainder of the U.S. voluntary contribution to IAEA—about $20 million—was spent on other forms of support for the technical cooperation program, including (1) U.S.-hosted IAEA training courses, (2) “footnote a” projects, (3) placements of IAEA fellows at U.S. institutions, (4) the services of U.S. experts, and (5) support for other IAEA programs, including safeguards. In 1996, the United States was the largest single supplier of equipment for the program. (App. II provides information on the sources of funding for IAEA’s technical assistance program from 1958 through 1996.) Because many IAEA member states are not paying into the technical cooperation fund, the United States and some other major donors are paying for a larger percentage of the fund than designated. IAEA has informally adopted a target funding level for member states’ contributions to the technical cooperation fund. IAEA’s data show that, as of August 1997, 52 of 124 member states had paid into the 1996 technical cooperation fund. The United States and Japan contributed the most, accounting for over half of the total payments to the fund. Seventy-two—or 58 percent—of the member states made no payments at all, yet 57 of these states received technical assistance. In a statement made to IAEA’s Board of Governors in June 1996, the U.S. Ambassador to the U.S. 
Mission to the United Nations System Organizations in Vienna, Austria, observed that the United States strongly believed that IAEA’s technical assistance should go only to those member states that support technical assistance fully, by paying their fair share. The Ambassador further noted that, because many IAEA member states are not paying their designated share of the technical cooperation fund, some member states, including the United States and Japan, are carrying the program financially, by paying more than their share. (App. III lists the IAEA member states and their shares of and payments to the 1996 technical cooperation fund.) The Ambassador of the Permanent Mission of the Republic of South Africa in Vienna, Austria, who chairs IAEA’s Informal Consultative Working Group on the Financing of Technical Assistance, told us that the group was designed to, among other things, encourage member states to increase their payments to the fund and to review whether member states that have not regularly paid into the fund should receive the benefits of IAEA’s technical assistance. The Ambassador from South Africa also told us that many of the developing countries that are members of IAEA believe that funding for the technical cooperation program should be predictable and assured and have proposed that the program be funded through member states’ contributions to IAEA’s regular budget. The major donors do not support this proposal because they believe that the program will be adequately funded if all member states provide financial support for the program. Representatives of the major recipients of IAEA’s technical assistance, including Argentina, China, Pakistan, and South Africa, told us that they are concerned that some major donors are considering reducing their voluntary contributions to IAEA, which fund the technical cooperation program. 
Canadian and German representatives told us that their countries may reduce their voluntary contributions to IAEA because of budget constraints. In a statement before the June 1997 meeting of IAEA’s Board of Governors, the Ambassador from South Africa said that the members of the working group were deeply divided on whether to put the technical cooperation fund into IAEA’s regular budget. She believed, however, that IAEA should take member states’ records of payment to the technical cooperation fund into account in deciding upon requests for technical assistance. IAEA officials stated that they took member states’ past payments to the fund into account when preparing for their 1997-98 program. U.S. officials do not systematically review or monitor all of IAEA’s technical assistance projects to ensure that IAEA’s activities do not conflict with U.S. nuclear nonproliferation and safety goals. We found that U.S. officials had sporadically reviewed projects in countries of concern to the United States. Several of IAEA’s technical assistance projects were related to a nuclear power plant under construction in Iran, to uranium prospecting and exploration in North Korea, and to a nuclear power plant whose construction has been suspended in Cuba. These are countries where the United States has concerns about nuclear proliferation and threats to nuclear safety. Moreover, since 1996, a portion of the funds for projects in countries of concern to the United States has come from U.S. voluntary contributions to IAEA. The Special Assistant to the U.S. Representative to IAEA in the State Department’s Bureau of Political-Military Affairs told us that the State Department, in conjunction with its contractor at the Argonne National Laboratory, is chiefly responsible for reviewing IAEA’s technical assistance projects for consistency with U.S. nonproliferation and safety goals before the projects are approved by IAEA’s Board of Governors. However, we found that although U.S. 
officials at the State Department and U.S. Mission have reviewed technical assistance projects in countries of concern to the United States sporadically, they have not done so systematically. Officials in IAEA’s Department of Technical Cooperation told us that they do coordinate with IAEA’s Department of Safeguards in reviewing projects that may involve the transfer of nuclear materials or other items with implications for proliferation. We also spoke with officials in IAEA’s Department of Safeguards to determine whether they systematically review all of IAEA’s technical assistance projects for consistency with nonproliferation goals. These IAEA officials told us that they do not. We found that the International Nuclear Technology Liaison Office—the interagency group that coordinates U.S. participation in the technical cooperation program and includes representatives from the State Department, DOE, ACDA, and NRC—and the U.S. contractor at Argonne National Laboratory focus their review on the “footnote a” projects that the United States may want to support with U.S. funds. The interagency group does not systematically review the majority of the technical assistance projects that are proposed for funding through IAEA’s technical cooperation fund. Neither does it regularly monitor ongoing projects. An Argonne official informed us that he reviews the list of “footnote a” projects to determine whether they have technical merit and should be funded by the United States; however, he is not responsible for assessing whether these or other projects funded through the technical cooperation fund are in keeping with U.S. nuclear nonproliferation and safety goals. State Department officials in the Bureau of International Organization Affairs told us that the Department did not have the resources to review all of the ongoing technical assistance projects and that U.S. oversight of these projects could be improved. ACDA, DOE, and U.S. 
Mission officials told us that the vast majority of IAEA’s technical assistance projects do not pose any concerns about nuclear proliferation because the assistance is provided in benign areas, such as medicine and agriculture, that do not involve transferring sensitive nuclear materials and technologies. IAEA’s Director General also told us that IAEA will not provide technical assistance in sensitive areas, such as the reprocessing and enrichment of nuclear material. State Department and U.S. Mission officials told us that if the United States does have concerns about specific technical assistance projects, it can informally raise its objections to IAEA’s Secretariat. However, U.S. officials we spoke with generally could not recall whether the United States had raised objections or had attempted to cancel any projects in the past several years. These U.S. officials also said that the United States does not have absolute control over the approval of specific technical assistance projects because decisions about approving and funding the projects are made collectively every 2 years at the December meeting of IAEA’s Board of Governors. A former U.S. Mission official told us that U.S. Mission representatives can meet informally with IAEA staff to discuss a preliminary list of technical assistance projects months before the Board of Governors’ meeting. The United States and other IAEA member states also have an opportunity to formally review the proposed list of technical assistance projects at IAEA’s General Conference in September and at the November meeting of the Technical Assistance and Cooperation Committee, the final meeting where member states can provide recommendations for the December Board of Governors’ meeting. U.S. officials told us that by the time the list of technical assistance projects reaches the Board of Governors, IAEA member states consider the projects to be approved. The U.S. 
officials added that it would be rare for representatives from the United States or any other member state to object formally to a specific technical assistance project during a meeting of IAEA’s Board of Governors. Of the total amount in technical assistance (about $800 million) that IAEA provided from 1958 through 1996 for its member states, about $52 million was spent on technical assistance for countries of concern to the United States, as defined by section 307(a) of the Foreign Assistance Act of 1961, as amended. These countries include Cuba, Libya, Iran, Myanmar (formerly Burma), Iraq, North Korea, and Syria. Iran and Cuba ranked 19th and 21st, respectively, among the 120 nations that received assistance over this period, receiving about 1.5 percent each of the total amount in technical assistance that IAEA provided. Projects IAEA provided for these countries involved nuclear training and techniques in medicine and agriculture, including establishing laboratory facilities for the production of radiopharmaceuticals in Iran and using nuclear techniques to improve the fertility of the soil in Iraq and the productivity of the livestock in Libya. (App. IV provides information on the dollar amounts and types of technical assistance that IAEA provided for its members states, including the countries of concern to the United States, from 1958 through 1996.) Although IAEA provides most of its technical assistance in areas that do not generally pose concerns about nuclear proliferation, our review of projects in countries of concern to the United States identified three cases in which IAEA provided technical assistance to countries where the United States has concerns about nuclear proliferation and threats to nuclear safety. A discussion of these three cases follows. 
The United States strongly opposes the sale of any nuclear-related technology to Iran, including the sale of Russian civilian reactor technology, because the United States believes that any nuclear technology and training could help Iran advance its nuclear weapons program. At an April 1997 hearing on concerns about proliferation associated with Iran, held before the Committee on Foreign Relations, Subcommittee on Near Eastern and South Asian Affairs, the former director of the Central Intelligence Agency stated that through the operation of the Bushehr reactor, the Iranians will develop substantial expertise that will be relevant to the development of nuclear weapons. For 1995 through 1999, IAEA has budgeted about $1.3 million for three ongoing technical assistance projects for the Bushehr nuclear power plant under construction in Iran. As of May 1997, about $250,000 of this amount had been spent for two of these projects. According to IAEA’s project summaries for 1997 through 1998, the three projects are (1) developing a nuclear regulatory infrastructure by training personnel in nuclear safety assessment; (2) establishing an independent multipurpose center that will provide emergency response services, train nuclear regulators, and conduct accident analyses in preparation for licensing the plant; and (3) building the capability of the nuclear technology center in Iran to support the Bushehr plant. (See app. V for more details on the assistance IAEA is providing to Iran for the Bushehr nuclear power plant.) IAEA also spent about $906,000 more for three recently completed technical assistance projects for the Bushehr plant in Iran.
According to IAEA’s status reports, the objectives of these projects were (1) to increase the capacity of the Atomic Energy Organization of Iran for evaluating nuclear power plant bids and to develop a regulatory infrastructure and policy; (2) to assist in assessing the status of the Bushehr plant before construction resumed, including advising on nuclear safety criteria for licensing and assisting in developing a national infrastructure for work on the plant’s construction; and (3) to assist in assembling and installing a radioactive waste incinerator for the plant. Under these projects, IAEA has sent experts on numerous missions to conduct safety reviews of the Bushehr plant and has provided equipment, such as computer systems. According to IAEA documents, IAEA believes that this assistance made a valuable contribution to the establishment of an infrastructure for Iran’s nuclear power program. In addition, IAEA cited an on-site assessment of the reactor building and components by Russian contractors as a critical element in the decision to complete the plant. We asked the State Department’s Deputy Assistant Secretary for Nonproliferation for his views on the technical assistance that IAEA has provided for Iran’s Bushehr nuclear power plant. According to his representative in the Bureau of Political-Military Affairs, the Special Assistant to the U.S. Representative to IAEA, the United States, as a general rule, opposes nuclear cooperation with Iran and the State Department would rather not see IAEA provide technical assistance for Iran’s Bushehr nuclear power plant. The State Department official also told us that the United States had informally raised concerns to IAEA about its provision of technical assistance to the Bushehr nuclear power plant. 
In March 1994, Senator Jesse Helms sent a letter to the President stating his concerns about IAEA’s providing technical assistance for uranium exploration in North Korea at a time when the country was suspected of developing a nuclear weapons program. According to an April 1994 letter to IAEA’s Director General from the U.S. Ambassador to the U.S. Mission, IAEA’s Director General had earlier assured U.S. congressional representatives that IAEA had suspended its technical assistance for North Korea because North Korea was in violation of its obligations under the NPT for failing to comply with IAEA’s safeguards. The U.S. Ambassador to the U.S. Mission stated that he was unaware that several technical assistance projects for North Korea were still ongoing or had recently begun. At the June 1994 meeting of the Board of Governors, the U.S. delegation strongly recommended that IAEA’s Director General suspend the provision of technical assistance to North Korea for all activities related to nuclear material, fuel cycle, and nuclear industrial applications until concerns about North Korea’s compliance with IAEA’s safeguards had been resolved. North Korea withdrew from IAEA in June 1994, and its technical assistance projects were canceled. From 1987 through 1994, IAEA spent about $396,000 in technical assistance for two projects on uranium prospecting and exploration in North Korea. According to IAEA’s April 1997 project status reports, the objectives of these projects were (1) to enable North Korea to better assess the potential of its nuclear raw materials in view of its increasing commitment to nuclear power and (2) to provide support for North Korea’s uranium exploration program. Under the uranium prospecting project, which was completed in 1994, the status report shows that IAEA contributed a considerable amount of uranium exploration equipment to North Korea, as well as a microcomputer and software for data processing. 
IAEA spent more than one-third of the $87,000 budgeted for the follow-on project on uranium exploration before the project was canceled following North Korea’s withdrawal from IAEA. In March 1997, when we issued our report on IAEA’s nuclear technical assistance for Cuba, including IAEA’s technical assistance to the partially completed nuclear power plant, the State Department’s Deputy Assistant Secretary for Nonproliferation visited IAEA’s Deputy Director General for Technical Cooperation to raise concerns about IAEA’s technical assistance projects for the nuclear power plant. The Deputy Assistant Secretary noted that strong U.S. support for IAEA’s technical cooperation program could be endangered by perceptions that IAEA is supporting Cuban plans to build an unsafe reactor. He also told IAEA’s Deputy Director General for Technical Cooperation that the United States found it hard to justify IAEA’s provision of assistance to Cuba’s nuclear power plant for quality assurance and licensing when, because of financial constraints, it was unlikely that the plant would be completed. However, as of June 1997, IAEA was still conducting these two projects in licensing and quality assurance for the Cuban plant. In our March 1997 report, we noted that, from 1981 through 1993, the United States was required, under section 307(a) of the Foreign Assistance Act of 1961 and related appropriations provisions, to withhold a proportionate share of its voluntary contribution to the technical cooperation fund for Cuba, Libya, Iran, and the Palestine Liberation Organization because the fund provided assistance to these entities. The United States withheld about 25 percent of its voluntary contribution to the fund for these entities. From 1981 through 1995, the State Department withheld a total of over $4 million. 
State Department officials told us they believe that the withholding was primarily a symbolic gesture that had no practical impact on the total amount of technical assistance that IAEA provided to these countries. On April 30, 1994, the Foreign Assistance Act was amended, and Myanmar (formerly Burma), Iraq, North Korea, and Syria were added to the list of entities from which U.S. funds for certain programs sponsored by international organizations were withheld. At the same time, IAEA was exempted from the withholding requirement. Consequently, as of 1994, the United States was no longer required to withhold a portion of its voluntary contribution to IAEA’s technical cooperation fund for any of these entities. However, State Department officials told us that they misinterpreted the act and continued to withhold funds in 1994 and 1995. Beginning in 1996, the State Department discontinued withholding any of the U.S. voluntary contribution to the fund. The United States and other IAEA major donor countries have had concerns about the effectiveness and efficiency of the technical cooperation program. However, IAEA has taken steps to improve the effectiveness and efficiency of the technical cooperation program and the measurement of the program’s performance. The United States and others strongly support these initiatives, but concerns remain about the sustainability of these improvements. The United States is paying for more than its designated share of the technical cooperation fund because many member states are not paying into the fund. Yet many of these states are receiving the benefits of IAEA’s technical assistance. This is contrary to the State Department’s position that all IAEA member states, particularly those that receive technical assistance, should provide financial support for the program. Although U.S. 
officials are sporadically reviewing technical assistance projects in countries of concern to the United States, they are neither systematically reviewing technical assistance projects before their approval nor regularly monitoring ongoing technical assistance projects. Without a systematic review, U.S. officials may be unaware of specific instances in which IAEA’s assistance could raise concerns for the United States about nuclear proliferation and threats to nuclear safety. Most of the assistance that IAEA provides is not considered to be sensitive. However, in several cases, the technical assistance that IAEA has provided is contrary to U.S. policy goals. Moreover, since 1996, a portion of the U.S. funding has supported technical assistance projects that will ultimately benefit nuclear programs, training, and techniques in countries of concern to the United States, including Iran and Cuba. To assist the Congress in making future decisions about the continued U.S. funding of IAEA’s technical cooperation program, the Congress may wish to require that the Secretary of State periodically report to it on any inconsistency between IAEA’s technical assistance projects and U.S. nuclear nonproliferation and safety goals. If the Congress wishes to make known that the United States does not support IAEA’s technical assistance projects in countries of concern, as defined by section 307(a) of the Foreign Assistance Act of 1961, as amended, it could explicitly require that the State Department withhold a proportional share of its voluntary funds to IAEA that would otherwise go to these countries. We recommend that the Secretary of State direct the U.S. interagency group on technical assistance, in consultation with the U.S. 
representative to IAEA, to systematically review all proposed technical assistance projects in countries of concern, as covered by section 307(a) of the Foreign Assistance Act of 1961, as amended, before the projects are approved by IAEA’s Board of Governors, to determine whether the proposed projects are consistent with U.S. nuclear nonproliferation and safety goals. If U.S. officials find that any projects are inconsistent with these goals, we recommend that the U.S. representative to IAEA make the U.S. objections known to IAEA and monitor the projects in these countries. We provided copies of a draft of this report to the Department of State for review and comment. The Department obtained and coordinated comments from Argonne National Laboratory; ACDA; DOE; NRC; the U.S. Mission to the United Nations System Organizations in Vienna, Austria; and IAEA. On August 1, 1997, we met with officials from the Department of State—including the Deputy Director, Office of Technical Specialized Agencies, Bureau of International Organization Affairs—and from the Department of Energy— including a Foreign Affairs Specialist in the Office of Nonproliferation and National Security. The agencies provided clarifying information and technical corrections, which we incorporated into the report. The agencies generally agreed with the facts as presented in the report and made no comments on our recommendations. They did, however, express one concern about our matters for congressional consideration. Specifically, they suggested that withholding a part of the U.S. voluntary contribution to IAEA that is proportional to all of the assistance that IAEA provides to Cuba, North Korea, and other countries of concern would be seen as a politicization of the technical assistance process that could undercut U.S. nonproliferation objectives. 
The agencies added that they do not object to IAEA’s providing technical assistance to countries of concern in the areas of nuclear safety, medicine, and agriculture. We cannot speculate on how others might view such a withholding requirement. However, as discussed in the report, the United States did, from 1981 through 1995, withhold a portion of its voluntary contribution to IAEA, amounting to over $4 million, for technical assistance for countries of concern to the United States. IAEA was exempted from the withholding requirement in 1994, although the State Department continued to withhold funds in 1994 and 1995. Our report also notes the recent introduction into the Congress of a bill proposing that the United States withhold a proportional share of its funds for IAEA’s programs or projects in Cuba. In addition, the agencies said that IAEA’s technical cooperation program, in general, has strongly supported U.S. nuclear safety policy objectives, most notably in Central and Eastern Europe and in the Newly Independent States that operate unsafe Soviet-designed reactors. The agencies further observed that the United States continues to support IAEA’s nuclear safety efforts. In appendix IV, we acknowledge IAEA’s contribution to nuclear safety, noting that from 1958 through 1996, IAEA spent about 16 percent of its technical assistance on safety in nuclear energy. We discussed U.S. participation in IAEA’s technical cooperation program with officials of and gathered data from the Department of State; DOE; ACDA; NRC; Argonne National Laboratory; and the National Academy of Sciences/National Research Council in Washington, D.C., as well as from the U.S. Mission to the United Nations System Organizations and IAEA in Vienna, Austria.
We met with IAEA’s Director General; Deputy Directors General for Administration, Research and Isotopes, Nuclear Energy, Nuclear Safety, and Technical Cooperation; the Principal Officer for the Deputy Director General for Safeguards; a Senior Legal Officer in the Department of Administration; and other staff. We reviewed program files at the Department of State and at the U.S. Mission to the United Nations System Organizations in Vienna, Austria. We gathered financial and programmatic data from IAEA on its technical cooperation for the period from 1958, when the program began, until 1996. Programmatic data for the entire period were not always available from IAEA. We did not independently verify the quality and accuracy of IAEA’s data. We also met in Vienna, Austria, with representatives from four of the member states that are major financial donors to the technical cooperation program and six of the states that receive extensive technical assistance or represent the views of the developing countries. The four major donors were Japan, Australia, Canada, and Germany; the six major recipient and/or developing countries were Argentina, Brazil, China, India, Pakistan, and South Africa. We also reviewed 40 reports on various aspects of the technical cooperation program that were prepared by IAEA’s Department of Technical Cooperation’s Evaluation Section; summaries of four audits of the program prepared by IAEA’s Office of Internal Audit and Evaluation Support that covered the period from 1985 through 1996; and four project files for selected technical assistance projects in Iran, North Korea, Bulgaria, and Egypt that were completed or canceled. We reviewed IAEA’s data on the technical assistance projects provided for countries of concern to the United States to determine whether IAEA’s assistance conflicted with U.S. nuclear nonproliferation and safety goals. We observed two meetings of the International Nuclear Technology Liaison Office (the U.S. 
interagency group that coordinates U.S. participation in IAEA’s technical cooperation program), the November 1996 meeting of the Technical Assistance and Cooperation Committee, and the December 1996 meeting of IAEA’s Board of Governors in Vienna, Austria. We performed our work from July 1996 through August 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of State and Energy, the Chairman of the Nuclear Regulatory Commission, the Director of the Arms Control and Disarmament Agency, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. In 1992, the International Atomic Energy Agency’s (IAEA) Deputy Director General for Technical Cooperation embarked on a series of improvements so that the technical cooperation program would better meet the needs of its recipients and its impact would be measurable. The United States and other IAEA member states strongly support the Deputy Director General’s efforts to improve the program. When IAEA’s current Deputy Director General for Technical Cooperation began his term in 1992, he established a new strategy for improving the effectiveness and efficiency of the program. 
According to an IAEA paper, the goal of the new strategy is to develop partnerships between IAEA and its member states so that technical assistance produces a “measurable socio-economic impact by directly contributing in a cost-efficient manner to the achievement of the highest development priority of the country.” Important components of the strategy are “model” projects that are expected to

- respond to a real need of the recipient country;
- produce a significant economic or social impact by looking beyond the immediate recipient of assistance to the final end user;
- demonstrate sustainability after the project’s completion through a strong commitment;
- require detailed workplans and objective performance indicators; and
- demonstrate an indispensable role for nuclear technology with distinct advantages over other approaches.

Since 1994, IAEA has initiated nearly 60 model projects, including those under the 1997-98 technical cooperation program. Few model projects have been completed, so it is too early to assess their impact. Nevertheless, some of the model projects that IAEA expects will have measurable results include

- using a radioimmunoassay to screen for thyroid deficiency in newborns;
- providing nuclear methods to evaluate the effectiveness of a government food supplement intervention program to combat malnutrition in Peru;
- supporting a program for using nuclear techniques to improve local varieties of sorghum and rice in Mali; and
- eliminating the tsetse fly from the island of Zanzibar using radiation to sterilize male flies.

IAEA is also working to design model projects within a “country program framework.” The goal of this framework is to achieve agreement between IAEA and the recipient country on concentrating technical cooperation on a few high-priority areas where projects produce a significant national impact. IAEA expects to have concluded the frameworks with one-half of the recipients of technical assistance by the year 2000.
Like most other IAEA member countries, the United States supports the efforts of IAEA’s Deputy Director General for Technical Cooperation to improve the effectiveness and efficiency of the technical cooperation program. U.S. officials believe that the initiatives and strategic goals of the Technical Cooperation Department and IAEA are extremely significant, particularly now that donor countries’ resources may be declining and the effectiveness and efficiency of all international organizations are being questioned. Since these reform efforts began, the United States has been a strong supporter of the program, making experts available to IAEA, funding specific model projects, and supporting the program in statements before IAEA’s Board of Governors. Although the United States, with other IAEA major donor countries, supports efforts to improve the technical cooperation program, it also shares some concerns with the other major donors about the sustainability of these improvements. State Department officials, including U.S. Mission officials, believe that IAEA must focus on implementation if the efforts at improvement are to last beyond the tenure of the current Deputy Director General, which ends in 1999. According to State Department officials, there is a difference between initiating change and achieving permanent change. These officials have insisted that the Department of Technical Cooperation provide IAEA’s Board of Governors with a strategic plan that will lead to permanent change. Within IAEA, the Department of Technical Cooperation and three other technical departments—the departments of Research and Isotopes, Nuclear Safety, and Nuclear Energy—are the main channels for technology transfer activities within the technical cooperation program. IAEA receives funding for the costs of administration and related support in the Department of Technical Cooperation and for activities in the three technical departments through IAEA’s regular budget. 
However, most of the funding for IAEA’s technical assistance—about 70 percent—comes from voluntary contributions made by the member states to IAEA’s technical cooperation fund, as figure II.1 shows. In addition to the technical cooperation fund, other sources of voluntary financial support for the program include the following:

- Extrabudgetary cash contributions are made by member states for specific technical assistance projects—known as “footnote a” projects—and for training. Although “footnote a” projects are considered to be technically sound by IAEA, they are of lower priority to recipient member states than the projects that are financed through the technical cooperation fund. The United States endeavors to provide support for “footnote a” projects in countries that are parties to nonproliferation treaties.
- Assistance in kind includes equipment donated by member states, expert services, or fellowships arranged on a cost-free basis.
- The United Nations Development Program (UNDP) provides funds through IAEA for development projects that IAEA implements in areas involving nuclear science and technology.

(Figure II.1 data, in millions of dollars: technical cooperation fund, $558.7; member states’ extrabudgetary contributions, $93.1; UNDP, $84.9; in-kind assistance, $56.8.)

For calendar year 1996, fewer than half of the 124 IAEA member states contributed to the technical cooperation fund. As table III.1 indicates, 52 states contributed a total of about $48.6 million. Of these states, the United States and Japan contributed the most, accounting for over half of the total payments to the fund. Twenty-four member states that contributed to the fund also received about $22.5 million in technical assistance from IAEA. In 1996, 72, or about 58 percent, of the 124 IAEA member states did not contribute to the technical cooperation fund. Fifty-seven of these states received a total of $26,039,722 in technical assistance from IAEA, as table III.2 indicates.
IAEA spent about $800 million on technical assistance for its member states from 1958—when the technical cooperation program began—through 1996, for equipment, expert services, training, and subcontracts. Figure IV.1 shows that about 44 percent of the funds were spent for equipment, such as computer systems and radiation-monitoring and laboratory equipment. In 1996, the United States was the largest single supplier of equipment for IAEA’s technical cooperation program. (Figure IV.1 data, in millions of dollars: equipment, $346; expert services, $195; fellowships/scientific visits, $174; training courses, $67, or 8 percent; subcontracts, $11, or 1 percent.) Of the more than 120 IAEA member states that received IAEA’s technical assistance from 1958 through 1996, 10 states received more than 20 percent of the $800 million given, or about $175.7 million collectively, as table IV.1 indicates. Egypt, which started to receive technical assistance from IAEA in 1970, has received the largest total amount. About half—or $334 million—of the $648 million that IAEA spent for technical assistance from 1980 through 1996 was provided for three program areas—the application of isotopes and radiation in agriculture, general atomic energy development, and safety in nuclear energy—as figure IV.2 shows. Moreover, two other program areas—nuclear engineering and technology, and the application of isotopes and radiation in industry and hydrology—received about 26 percent of the funds, for a total of about $169 million. IAEA approved about $154 million more in technical assistance projects for its member states for 1997 through 1998. Over half of this additional assistance will be provided for the application of isotopes and radiation in medicine, agriculture, and safety in nuclear energy.
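As a rough cross-check, the funding and spending shares cited in this report are consistent with the component totals reported for figures II.1 and IV.1. The following sketch simply redoes that arithmetic from the dollar amounts given in the report (all figures in millions of dollars); it adds no data beyond what the report states.

```python
# Arithmetic check of the shares reported with figures II.1 and IV.1
# (all amounts in millions of dollars, as given in the report).

# Figure II.1: sources of funding for IAEA's technical cooperation program.
tc_fund, member_extra, undp, in_kind = 558.7, 93.1, 84.9, 56.8
total_funding = tc_fund + member_extra + undp + in_kind
tc_share = tc_fund / total_funding  # matches "about 70 percent"

# Figure IV.1: technical assistance by type, 1958-1996 (about $800 million total).
equipment, experts, fellowships, training, subcontracts = 346, 195, 174, 67, 11
total_assistance = equipment + experts + fellowships + training + subcontracts
equipment_share = equipment / total_assistance  # matches "about 44 percent"

print(f"funding from TC fund: {tc_share:.0%}")
print(f"spending on equipment: {equipment_share:.0%}")
```

The component totals also reconcile with the headline figures: the figure IV.1 categories sum to $793 million, in line with the report's "about $800 million" for 1958 through 1996.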
Of the about $800 million in technical assistance provided by IAEA to all of its member states from 1958 through 1996, about $52 million was spent on countries currently of concern to the United States. As table IV.2 indicates, most assistance given to these countries was in the form of equipment. In 1973, a German firm began the construction of two reactors in Iran near Bushehr, but construction was halted during the Islamic Revolution in 1979. In 1995, Iran and Russia reached an $800 million agreement for the Ministry of the Russian Federation for Atomic Energy (MINATOM) to resume the construction of Unit 1 of the Bushehr nuclear power plant and to switch from a German-designed to a Russian-designed VVER-1000 model reactor. According to IAEA’s project summaries for the proposed 1997-98 program, the decision to resume the Bushehr project with a new design has placed heavy responsibility on Iran’s Nuclear Safety Department, the regulatory body of the Atomic Energy Organization of Iran. For 1995 through 1999, IAEA budgeted about $1.3 million for three ongoing technical assistance projects for the Bushehr nuclear power plant under construction in Iran. As of May 1997, about $250,000 of this amount had been spent for two of these projects. According to IAEA’s project summaries for 1997-98, the three projects are (1) developing a nuclear regulatory infrastructure by training personnel in nuclear safety assessment; (2) establishing an independent multipurpose center that will provide emergency response services, train nuclear regulators, and analyze accidents in preparation for licensing the plant; and (3) building the capability of the Esfahan Nuclear Technology Center in Iran to support the Bushehr plant. 
This ongoing project was originally approved in 1995 and is partly a continuation of another project—completed in 1995 for about $77,000—to increase the capability of staff at the Atomic Energy Organization of Iran to evaluate nuclear power plant bids and to develop a regulatory infrastructure and policy. The aim of the ongoing project is to develop a nuclear regulatory infrastructure by training personnel in nuclear safety assessment and in operator responsibilities. Under the project, IAEA has sent experts on numerous missions to Iran to provide advice and training in quality assurance, project management, and site and safety reviews; has provided supplies such as books and journals; and has sponsored some fellowships and scientific visits. A workshop for the top management of Iran’s atomic energy authority was held on quality assurance in 1995. Eight reports have been prepared under the project by experts on topics such as quality assurance, a preliminary safety review of the plant, and a review of seismic hazard studies at the plant site. As of May 1997, IAEA had spent about $241,000 for expert services, equipment (supplies), and fellowships—or about half of the approximately $494,000 that it plans to spend through 1998, as indicated in table V.1. This new model project, which was approved under IAEA’s 1997-98 technical cooperation program, is intended to improve the overall safety of the plant by establishing an independent multipurpose center that will provide emergency response services, train regulators, and analyze accidents. 
IAEA will furnish experts to advise, assist, and provide training in the following areas: (1) identify safety features and evaluate them in the context of the VVER-1000 design for formulating the regulatory requirements; (2) formulate a safety policy and associated licensing and supervisory procedures for the completion of the plant; (3) train regulatory staff; (4) evaluate submitted regulatory documents; and (5) establish a national regulatory inspectorate to carry out inspections during the design, construction, commissioning, and operation of the plant. IAEA has already sent a number of experts on missions to Iran as a part of the project. IAEA expects that the project will help the national regulatory body to discharge its statutory responsibilities for ensuring that the plant is constructed according to regulatory standards conducive to safe operation. As of May 1997, IAEA had provided approximately $8,440 in expert services and was planning to provide a total of approximately $403,000 for expert services and fellowships through 1999. Another new project for the plant, which was approved under IAEA’s 1997-98 technical cooperation program, will enhance the ability of Iran’s Esfahan Nuclear Technology Center to support the Bushehr plant. IAEA’s project summary states that while Iran’s nuclear technology center has adequate technical and scientific expertise on nuclear safety and quality assurance to support Iran’s nuclear regulatory body and the plant, the center has asked for IAEA’s expert advice and transfer of up-to-date knowledge. IAEA will provide expert services to help the center analyze the capabilities of the power plant and will provide training in reactor safety analysis and reactor technology. According to the project summary, this project will develop expertise at the center in safety analysis and other technical expertise for the Bushehr plant. IAEA plans to provide a total of $400,800 for expert services and fellowships for the project by 1999. 
Nuclear Nonproliferation: Implementation of the U.S./North Korean Agreed Framework on Nuclear Issues (GAO/RCED/NSIAD-97-165, June 2, 1997). International Organizations: U.S. Participation in the United Nations Development Program (GAO/NSIAD-97-8, Apr. 17, 1997). Nuclear Safety: International Atomic Energy Agency’s Nuclear Technical Assistance for Cuba (GAO/RCED-97-72, Mar. 24, 1997). Nuclear Safety: Uncertainties About the Implementation and Costs of the Nuclear Safety Convention (GAO/RCED-97-39, Jan. 2, 1997). Nuclear Safety: Status of U.S. Assistance to Improve the Safety of Soviet-Designed Reactors (GAO/RCED-97-5, Oct. 29, 1996). Nuclear Nonproliferation: Implications of the U.S./North Korean Agreement on Nuclear Issues (GAO/RCED/NSIAD-97-8, Oct. 1, 1996). Nuclear Safety: Concerns With the Nuclear Power Reactors in Cuba (GAO/T-RCED-95-236, Aug. 1, 1995). Nuclear Safety: U.S. Assistance to Upgrade Soviet-Designed Nuclear Reactors in the Czech Republic (GAO/RCED-95-157, June 28, 1995). Nuclear Safety: International Assistance Efforts to Make Soviet-Designed Reactors Safer (GAO/RCED-94-234, Sept. 29, 1994). Foreign Assistance: U.S. Participation in FAO’s Technical Cooperation Program (GAO/NSIAD-94-32, Jan. 11, 1994). Nuclear Nonproliferation and Safety: Challenges Facing the International Atomic Energy Agency (GAO/NSIAD/RCED-93-284, Sept. 22, 1993). Nuclear Safety: Progress Toward International Agreement to Improve Reactor Safety (GAO/RCED-93-153, May 14, 1993). Nuclear Safety: Concerns About the Nuclear Power Reactors in Cuba (GAO/RCED-92-262, Sept. 24, 1992). | Pursuant to a congressional request, GAO examined: (1) the purpose and effectiveness of the International Atomic Energy Agency's (IAEA) technical cooperation program; (2) the cost of U.S. participation in IAEA's technical cooperation program; and (3) whether the United States ensures that the activities of IAEA's technical cooperation program do not conflict with U.S. nuclear nonproliferation and safety goals. GAO found that: (1) while the United States and other IAEA major donor countries believe that applying safeguards is IAEA's most important function, most developing countries believe that receiving technical assistance through IAEA's technical cooperation program is just as important; (2) the United States and other major donors principally participate in the program to help ensure that the member states fully support IAEA's safeguards and the 1970 Treaty on the Non-Proliferation of Nuclear Weapons; (3) in the past, concerns were raised about the effectiveness and efficiency of the technical cooperation program; (4) most of IAEA's program evaluation reports, internal audits, and project files that GAO reviewed did not assess the impact of the technical cooperation program, and no performance criteria had been established to help measure the success or failure of the program; (5) for the past 5 years, IAEA's Deputy Director General for Technical Cooperation has been taking steps to improve the overall effectiveness 
and efficiency of the program, but State Department officials are concerned about their sustainability; (6) the United States, historically the largest financial donor to the fund, provided a voluntary contribution of about $16 million, or about 32 percent of the total $49 million paid by IAEA member states for 1996; (7) for 1996, 72 of the 124 member states made no payments at all to the technical cooperation fund yet most of these states received technical assistance from IAEA; (8) officials from the Department of State, the Arms Control and Disarmament Agency, and the U.S. Mission to the United Nations System Organizations in Vienna, Austria, told GAO that they do not systematically review or monitor all of IAEA's technical assistance projects to ensure that they do not conflict with U.S. nuclear nonproliferation or safety goals; (9) however, GAO found that U.S. officials had sporadically reviewed projects in countries of concern to the United States; (10) U.S. officials also told GAO that the vast majority of IAEA's technical assistance projects do not pose any concerns about nuclear proliferation because the assistance is generally in areas that do not involve the transfer of sensitive nuclear materials and technologies; (11) however, GAO found that IAEA has provided nuclear technical assistance projects for countries where the United States is concerned about nuclear proliferation and threats to nuclear safety; and (12) moreover, a portion of the funds for projects in countries of concern is coming from U.S. voluntary contributions to IAEA. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In March 2008, we reported that the IRIS database—a critical component of EPA’s capacity to support scientifically sound risk management decisions, policies, and regulations—was at serious risk of becoming obsolete because the agency had not been able to complete timely, transparent, and credible chemical assessments or decrease its backlog of ongoing assessments. In addition, assessment process changes EPA had recently made, as well as other changes EPA was considering at the time of our review, would have further reduced the credibility, transparency, and timeliness of IRIS assessments. Among other things, we concluded the following: EPA’s efforts to finalize IRIS assessments have been impeded by a combination of factors. These factors include (1) the Office of Management and Budget’s (OMB) requiring two additional reviews of IRIS assessments by OMB and other federal agencies with an interest in the assessments, such as the Department of Defense, and (2) EPA management decisions, such as delaying some assessments to await the results of new research. The two new OMB/interagency reviews of draft assessments involve other federal agencies in EPA’s IRIS assessment process in a manner that limits the credibility of IRIS assessments and hinders EPA’s ability to manage them. For example, some of the agencies participating in these reviews could face increased cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a decision to regulate the chemical to protect the public. Moreover, the input these agencies provide to EPA is treated as “deliberative” and is not released to the public. Regarding EPA’s ability to manage IRIS assessments, without communicating its rationale for doing so, OMB required EPA to terminate five assessments that for the first time addressed acute, rather than chronic, exposure—even though EPA had initiated this type of assessment to help it implement the Clean Air Act. 
The changes to the IRIS assessment process that EPA was considering, but had not yet issued at the time of our 2008 review, would have added to the already unacceptable level of delays in completing IRIS assessments and would have further limited the credibility of the assessments. For example, the changes would have allowed potentially affected federal agencies to have assessments suspended for up to 18 months to conduct additional research. As we reported in 2008, even one delay can have a domino effect, requiring the assessment process to essentially be repeated to incorporate changing science. In April 2008, EPA issued a revised IRIS assessment process. The process was largely the same as the draft process we had evaluated during our review and did not respond to the recommendations in our March 2008 report. Moreover, some key changes were likely to further exacerbate the productivity and credibility concerns we initially identified. For example, EPA’s revised process formally defined comments on IRIS assessments from OMB and other federal agencies as “deliberative” and excluded them from the public record. As we stated in our report, it is critical that input from all parties—particularly agencies that may be affected by the outcome of IRIS assessments—be publicly available. In addition, we concluded that the estimated time frames under the revised process, especially for chemicals of key concern, would likely perpetuate the cycle of delays to which the majority of ongoing assessments have been subject. Instead of streamlining the process, as we had recommended, EPA institutionalized a process that from the outset was estimated to take 6 to 8 years for some widely used chemicals that are likely to cause cancer or other serious health effects. This was particularly problematic because of the substantial rework such cases often require to take into account changing science and methodologies. 
Largely as a result of EPA’s lack of responsiveness, we added transforming EPA’s processes for assessing and controlling toxic chemicals as a high-risk area in our January 2009 biennial status report on governmentwide high-risk areas requiring increased attention by executive agencies and Congress. Taking positive action, on May 21, 2009, EPA issued a new IRIS assessment process, effective immediately. In a memorandum announcing the reforms to the IRIS assessment process, the EPA Administrator echoed our prior findings that the April 2008 changes to the process reduced the transparency, timeliness, and scientific integrity of the IRIS process. She noted that the President’s recent emphasis on the importance of transparency and scientific integrity in government decision making compelled a rethinking of the IRIS process. If effectively implemented, the new process would be largely responsive to the recommendations outlined in our March 2008 report. First, the new process and the memorandum announcing it indicate that the IRIS assessment process will be entirely managed by EPA, including the interagency consultations (formerly called OMB/interagency reviews). Under EPA’s prior process, these two interagency reviews were required and managed by OMB—and EPA was not allowed to proceed with assessments at various stages until OMB notified EPA that it had sufficiently responded to comments from OMB and other agencies. The independence restored to EPA under the new process is critical in ensuring that EPA has the ability to develop transparent, credible IRIS chemical assessments that the agency and other IRIS users, such as state and local environmental agencies, need to develop adequate protections for human health and the environment. Second, the new process addresses a key transparency concern highlighted in our 2008 report and testimonies. 
As we recommended, it expressly requires that all written comments on draft IRIS assessments provided during the interagency consultation process by other federal agencies and White House offices be part of the public record. Third, the new process streamlines the previous one by consolidating and eliminating some steps. Importantly, EPA eliminated the step under which other federal agencies could have IRIS assessments suspended in order to conduct additional research, thus returning to EPA’s practice in the 1990s of developing assessments on the basis of the best available science. As we highlighted in our report, as a general rule, requiring that IRIS assessments be based on the best science available at the time of the assessment is a standard that best supports the goal of completing assessments within reasonable time periods and minimizing the need to conduct significant levels of rework. Fourth, as outlined in the EPA Administrator’s memorandum announcing the new IRIS process, the President’s fiscal year 2010 budget request includes an additional $5 million and 10 full-time-equivalent staff positions for the IRIS program, which is responsive to our recommendation to assess the level of resources that should be dedicated in order to meet user needs and maintain a viable IRIS database. We are encouraged by the efforts EPA has made to adopt most of our recommendations, including those addressing transparency practices and streamlining the lengthy IRIS assessment process. The changes outlined above reflect a significant redirection of the IRIS process that, if implemented effectively, can help EPA restore the integrity and productivity of this important program. Nevertheless, on the basis of our preliminary review of the new IRIS assessment process, we have some initial questions that EPA may wish to consider as it implements its new process. 
For example, regarding integrity and transparency, it is not clear whether any significant agreements reached among the federal agencies during interagency consultation meetings will be documented in the public record, since the new policy specifies only that written comments provided by other federal agencies will become part of the public record; and why comments from other federal agencies cannot be solicited at the same time the initial draft is sent to independent peer reviewers and public comments are solicited. This change would enhance transparency and would further reduce overall assessment time frames. Specifically, the public and peer reviewers could have greater assurance that the draft had not been inappropriately biased by policy considerations of other agencies, including those that may be affected by the outcome, such as the Department of Defense and the Department of Energy. In addition, the new assessment process states that “White House offices” will be involved in the interagency consultation process but does not indicate which offices. Given that (1) EPA will be performing the coordinating role that OMB exercised under the prior process and (2) the purpose of these consultations is to obtain scientific feedback, it is unclear whether OMB will continue to be involved in the interagency consultation process. Independent, expert peer review of EPA’s scientific and regulatory products, such as risk assessments and proposed rules, is integral to the agency’s ability to effectively protect public health and the environment. Specifically, using peer review, EPA seeks to enhance the quality and credibility of the agency’s highly specialized products. One of the several ways EPA obtains expert peer review is from advice and recommendations it requests of its 24 federal advisory committees comprising independent experts. 
For example, since its inception in 1978, one of EPA’s largest and most prominent federal advisory committees—the EPA Science Advisory Board—has convened hundreds of peer review panels to assess the scientific and technical rationales underlying a wide range of current or proposed EPA regulations and policies. The IRIS program uses Science Advisory Board panels to peer review some of its particularly complex chemical assessments, and the Board is currently expanding a panel that will review existing IRIS assessment values established more than 10 years ago. Federal advisory committees such as the Science Advisory Board are subject to the requirements of the Federal Advisory Committee Act (FACA), which include broad requirements for balance, independence, and transparency. To be effective, peer review panels must be—and also be perceived to be—free of any significant conflict of interest and uncompromised by bias. Peer review panels should also be properly balanced, allowing for a spectrum of views and appropriate expertise. These standards, reflected in the act, are important because the work of fully competent peer review panels can be undermined by allegations of conflict of interest and bias. In 2001, we reported on limitations in the policies and procedures developed by EPA’s Science Advisory Board to ensure that its panels’ peer reviewers are independent and that a balance of viewpoints is represented on each panel. These limitations could reduce the effectiveness of the Board overall by contributing to its being perceived as biased and could inadvertently expose some panelists to violations of federal conflict-of-interest laws. 
Demonstrating a strong commitment to the integrity of its peer reviews, EPA took a number of actions to implement our report’s recommendations, including establishing a standard process for Science Advisory Board panel formation that includes a requirement to document decisions about conflicts of interest and balance of viewpoints and expertise in forming each panel, as well as prospective panelists’ responses to several standardized questions aimed at assessing impartiality; developing a new confidential financial disclosure form designed to capture needed information to evaluate potential conflicts of interest; allowing the public to review a “short list” of candidates selected for a specific Science Advisory Board panel and to comment on the appropriateness of including any of these candidates on the panel; and developing CD-based conflict-of-interest training for Science Advisory Board panelists. In 2004, we reported on the policies and procedures at nine federal departments and agencies, including EPA, that extensively use federal advisory committees. We also identified practices that promote independence and balance used by the National Academies and the EPA Science Advisory Board. Regarding the latter issue, we concluded that the National Academies and the EPA Science Advisory Board have developed clear processes that, if effectively implemented, can provide these organizations with an assurance that relevant conflicts of interest are identified and addressed—and that committees are appropriately balanced in terms of points of view. 
Specifically, we found that the processes used by the National Academies and EPA’s Science Advisory Board clearly and consistently identify the information they deem necessary to assess candidates for independence and to balance committees, explain to the candidates why the required information is important to protect the integrity of the committee’s work, request public comment on proposed committee membership, and require evaluation of the overall balance of committees before committees are finalized. Regarding the federal advisory committee policies and procedures at nine departments and agencies, in 2004 we found that the Departments of Agriculture, Energy, and the Interior had a long-standing practice of appointing most or all members of their federal advisory committees as “representatives”—expected to reflect the views of the entity or group they are representing and not subject to conflict-of-interest reviews—even when the departments called upon the members to provide advice on behalf of the government on the basis of their best judgment and thus should have appointed them as special government employees. That is, members of federal advisory committees that are providing advice on behalf of the government should be appointed as “special government employees”—short-term or intermittent employees subject, with some important modifications, to the conflict-of-interest requirements applicable to other federal employees. We also reported that representative appointments are generally not appropriate for scientific and technical advisory committees, which typically provide advice on behalf of the government. We made recommendations to the two agencies responsible for overseeing aspects of federal advisory committees to, among other things, provide additional guidance to federal agencies on the appropriate use of representative appointments. In response, these agencies issued such guidance in 2004 and 2005. 
(See appendix I for additional information on our 2004 federal advisory committee recommendations.) The two scientific EPA federal advisory committees we assessed in our 2004 report appropriately appointed their members as special government employees. We note that 16 of the 24 EPA federal advisory committees currently use representative appointments, according to the government’s database of federal advisory committee information. While EPA may be appropriately seeking stakeholder advice from some of these advisory committees, a number of its committees focus on scientific and technical questions for which EPA is likely to be seeking advice on behalf of the government on the basis of committee members’ best judgment, rather than stakeholder advice. EPA’s scientific and technical committees using representative appointments include the National Advisory Committee for Acute Exposure Guideline Levels for Hazardous Substances, the Coastal Elevations and Sea Level Rise Advisory Committee, the Environmental Laboratory Advisory Board, and the Children’s Health Protection Advisory Committee. In reviewing information about EPA’s committees, we found that descriptions of the objectives and scope of committee activities for EPA committees using representative appointments are similar to such descriptions for EPA committees using special government employees, such as the Science Advisory Board; the Federal Insecticide, Fungicide, and Rodenticide Science Advisory Panel; the National Drinking Water Advisory Council; and the Human Studies Review Board. As EPA moves forward with actions to enhance its scientific integrity, it will be appropriate for the agency to review its federal advisory committee appointments, especially those for which it appoints members as representatives, to help ensure that committee work is not jeopardized by allegations of conflict of interest or bias. 
As discussed earlier, committee members appointed as representatives are not evaluated for potential conflicts of interest. If some EPA committee members are inappropriately appointed as representatives, EPA cannot be assured that any real or perceived conflicts of interest of their committee members who provided advice on behalf of the government were identified and appropriately mitigated. Further, allegations that the members had conflicts of interest could call into question the independence of the committee and jeopardize the credibility of the committee’s work. Advisory committee charters generally expire at the end of 2 years unless renewed by the agency or Congress. The EPA committees with representative members discussed earlier have charters expiring in 2009 and 2010. As it reviews its policies and procedures to ensure scientific integrity, EPA could either comprehensively review the appointments of its 16 committees with representative members or, alternatively, review them as the charters are renewed. We note that EPA has in-house expertise in managing federal advisory committees composed of special government employees—for example, the staff who administer and coordinate Science Advisory Board committees—and thus should be well positioned to address this issue. In conclusion, EPA’s most recent changes to the IRIS assessment process, if effectively implemented, would represent a significant improvement over the process put in place in 2008. Among other things, the reforms appropriately restore EPA’s control of the IRIS process and increase the transparency of the process. In addition, EPA was responsive to our 2001 recommendations for improving the independence and balance of committees convened by EPA’s Science Advisory Board by developing policies and procedures that represent best practices. 
As a result, if these policies and procedures are implemented effectively, EPA can have an assurance that its Science Advisory Board panels are independent and balanced as a whole. However, a number of EPA’s other federal advisory committees do not appear to have benefited from the steps the Science Advisory Board has taken to enhance the integrity and transparency of its committees. As EPA takes additional steps to comply with the President’s March 9, 2009, memorandum on scientific integrity, we believe that EPA’s scientific processes could be further enhanced by considering our questions about some aspects of the IRIS assessment process and reviewing its federal advisory committee appointments. Madam Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Christine Fishkin (Assistant Director), Laura Gatz, Richard P. Johnson, Summer Lingard, Nancy Crothers, Antoinette Capaccio, and Carol Kolarik. Following are highlights of the recommendations in our 2004 report, Federal Advisory Committees: Additional Guidance Could Help Agencies Better Ensure Independence and Balance, to the General Services Administration (GSA) and the Office of Government Ethics (OGE). These agencies oversee aspects of federal advisory committees. Specifically, GSA develops guidance on establishing and managing Federal Advisory Committee Act (FACA) committees, and OGE develops regulations and guidance for statutory conflict-of-interest provisions that apply to special government employees. 
Our 2004 report contained recommendations to GSA and OGE to, among other things, provide additional guidance to federal agencies on the appropriate use of representative appointments. Specifically, we recommended that guidance from OGE to agencies be improved to better ensure that members appointed to committees as representatives were, in fact, representing a recognizable group or entity. OGE agreed that some agencies may have been inappropriately identifying certain advisory committee members as representatives instead of special government employees and issued guidance documents in July 2004 and August 2005 that clarified the distinction between special government employees and representative members. In particular, as we recommended, OGE’s clarifications included that (1) members should not be appointed as representatives purely on the basis of their expertise and (2) appointments as representatives are limited to circumstances in which the members are speaking as stakeholders for the entities or groups they represent. We also recommended that OGE and GSA modify their FACA training materials to incorporate the changes in guidance regarding the appointment process, which they have done. In addition, we recommended that GSA expand its FACA database to identify each committee member’s appointment category and, for representative members, the entity or group represented. GSA quickly implemented this recommendation and now has data on appointments beginning in 2005. Finally, we recommended that OGE and GSA direct agencies to review their appointments of representative and special government employee committee members to make sure they are appropriate. OGE’s 2004 and 2005 guidance documents addressed this issue by, among other things, recommending that agency ethics officials periodically review appointment designations to ensure they are proper. High-Risk Series, An Update. GAO-09-271. Washington, D.C.: January 2009. 
EPA Science: New Assessment Process Further Limits the Credibility and Timeliness of EPA’s Assessments of Toxic Chemicals. GAO-08-1168T. Washington, D.C.: September 18, 2008. Environmental Health: EPA Efforts to Address Children’s Health Issues Need Greater Focus, Direction, and Top-Level Commitment. GAO-08-1155T. Washington, D.C.: September 16, 2008. Chemical Assessments: EPA’s New Assessment Process Will Further Limit the Productivity and Credibility of Its Integrated Risk Information System. GAO-08-810T. Washington, D.C.: May 21, 2008. Toxic Chemicals: EPA’s New Assessment Process Will Increase Challenges EPA Faces in Evaluating and Regulating Chemicals. GAO-08-743T. Washington, D.C.: April 29, 2008. Federal Advisory Committee Act: Issues Related to the Independence and Balance of Advisory Committees. GAO-08-611T. Washington, D.C.: April 2, 2008. Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA’s Integrated Risk Information System. GAO-08-440. Washington, D.C.: March 7, 2008. Federal Advisory Committees: Additional Guidance Could Help Agencies Better Ensure Independence and Balance. GAO-04-328. Washington, D.C.: April 16, 2004. EPA’s Science Advisory Board Panels: Improved Policies and Procedures Needed to Ensure Independence and Balance. GAO-01-536. Washington, D.C.: June 12, 2001. 
| The Environmental Protection Agency's (EPA) ability to effectively implement its mission of protecting public health and the environment relies largely on the integrity and transparency of (1) its assessments of the potential human health effects of exposure to chemicals and (2) its federal advisory committees, which are to provide independent, expert reviews of EPA's scientific work, among other functions. EPA's Integrated Risk Information System (IRIS) program is critical in developing the agency's scientific positions on the potential health effects of exposure to toxic chemicals. These positions, used as a basis for environmental risk management decisions by EPA and others, are maintained in IRIS' database of more than 540 chemical assessments. Since 2001, GAO has issued a number of reports addressing the importance of integrity and transparency to EPA's chemical assessments and to EPA's federal advisory committees. GAO work on EPA's advisory committees has focused on its Science Advisory Board--1 of 24 EPA federal advisory committees--which convenes panels to review many of the agency's scientific assessments and proposals. This testimony highlights scientific integrity and transparency issues GAO has reported on and relevant EPA reform efforts regarding (1) the IRIS assessment process and (2) federal advisory committee policies and procedures and appointment mechanisms. GAO has supplemented information from its prior reports with a preliminary review of the IRIS assessment process EPA issued on May 21, 2009, and the current appointment mechanisms for members of EPA's federal advisory committees. In March 2008, GAO reported that the database of chemicals assessed under the IRIS program was at serious risk of becoming obsolete because EPA had not been able to complete timely, transparent, and credible assessments or decrease its backlog of ongoing assessments. 
A revised IRIS assessment process EPA issued in April 2008 did not respond to GAO's recommendations; rather, it made changes likely to further exacerbate concerns GAO had identified. Largely as a result of EPA's lack of responsiveness, GAO added EPA's processes for assessing and controlling toxic chemicals as a high-risk area in its January 2009 biennial status report on governmentwide high-risk areas requiring increased attention by executive agencies and Congress. Taking positive action, EPA issued a new IRIS assessment process on May 21, 2009. In announcing these reforms, EPA echoed GAO's findings that the April 2008 assessment changes reduced the transparency, timeliness, and scientific integrity of the IRIS process. The IRIS reforms, if implemented effectively, will represent significant improvements. Among other things, they restore EPA's control of the process and increase its transparency. For example, under the prior process, interagency reviews were required and managed by the Office of Management and Budget (OMB) and EPA was not allowed to proceed with assessments at various stages until OMB notified EPA that it had sufficiently responded to comments from OMB and other agencies. In contrast, under the recently announced process, EPA is to manage the entire IRIS assessment process, including what are now called interagency consultations. In 2001, GAO reported on limitations in the policies and procedures developed by EPA's Science Advisory Board to ensure that its panels' peer reviewers are independent and that a balance of viewpoints is represented on each panel. These limitations could have reduced the effectiveness of the Board by contributing to its being perceived as biased and could have inadvertently exposed panelists to violations of federal conflict-of-interest laws. EPA revised the Board's policies and procedures, as GAO had recommended. 
In a broader 2004 report on federal advisory committees, GAO highlighted the Board's revised policies and procedures, and those of the National Academies, which can--if implemented effectively--provide an assurance that relevant conflicts of interest are identified and addressed and that the committees are balanced in terms of points of view. However, EPA currently appoints members to 16 of its federal advisory committees using an appointment mechanism reserved for cases in which members are to speak as representatives of identified entities and are not subject to conflict-of-interest reviews, rather than as individuals speaking on behalf of the government on the basis of their best judgment. While EPA may be appropriately seeking stakeholder advice from some of its advisory committees, a number of these committees focus on scientific and technical questions for which EPA is likely to be seeking advice on behalf of the government. As EPA works to enhance scientific integrity, a review of advisory committee appointments could help ensure that committee work is not jeopardized by allegations of conflicts of interest or bias. |
An improper payment is any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. This definition includes any payment to an ineligible recipient, any payment for an ineligible good or service, any duplicate payment, any payment for a good or service not received (except where authorized by law), and any payment that does not account for credit for applicable discounts. Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, § 2(e), 124 Stat. 2224, 2227 (2010) (codified at 31 U.S.C. § 3321 note). Office of Management and Budget guidance also instructs agencies to report as improper payments any payments for which insufficient or no documentation was found. CMS relies on several types of claim review contractors to review the claims that pose the greatest financial risk to Medicare (see Table 1). However, the contractors have varying roles and levels of CMS direction and oversight in identifying claims for review. MACs process and pay claims and conduct prepayment and postpayment reviews for their established geographic regions. As of January 2016, 12 MACs—referred to as A/B MACs—processed and reviewed Medicare Part A and Part B claims, and 4 MACs—referred to as DME MACs—processed and reviewed DME claims. MACs are responsible for identifying both high-risk providers and services for claim reviews, and CMS has generally given the MACs broad discretion to identify claims for review. Each individual MAC is responsible for developing a claim review strategy to target high-risk claims. In their role of processing and paying claims, the MACs also take action based on claim review findings. The MACs deny payment on claims when they or other contractors identify payment errors during prepayment claim reviews.
When MACs or other claim review contractors identify overpayments using postpayment reviews, the MACs seek to recover the overpayment by sending providers what is referred to as a demand letter. In the event of underpayments, the MACs return the balance to the provider in a future reimbursement. For additional information on the MAC roles and responsibilities, see GAO, Medicare Administrative Contractors: CMS Should Consider Whether Alternative Approaches Could Enhance Contractor Performance, GAO-15-372 (Washington, D.C.: Apr. 2015). Congress established per-beneficiary Medicare limits for therapy services, which took effect in 1999. However, Congress imposed temporary moratoria on the limits several times until 2006, when it required CMS to implement an exceptions process in which exceptions to the limits are allowed for reasonable and necessary therapy services. Starting in 2012, the exceptions process has applied a claim review requirement on claims after a beneficiary’s annual incurred expenses reach certain thresholds. For additional information on the therapy service limits, see GAO, Medicare Outpatient Therapy: Implementation of the 2012 Manual Medical Review Process, GAO-13-613 (Washington, D.C.: July 2013). As required by law, the RAs are paid on a contingent basis from recovered overpayments. The contingency fees generally range from 9.0 percent to 17.5 percent, and vary by RA region, the type of service reviewed, and the way in which the provider remits the overpayment. Because the RAs are paid from recovered funds rather than appropriated funds, the use of RAs expands CMS’s capacity for claim reviews without placing additional demands on the agency’s budget. The RAs are allowed to target high-dollar claims that they believe have a high risk of improper payments, though they are not allowed to identify claims for review solely because they are high-dollar claims.
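As a rough sketch of the contingency-fee arithmetic described above (the function name and example figures are hypothetical; only the 9.0 to 17.5 percent range comes from the text):

```python
def ra_contingency_fee(recovered_overpayment: float, rate: float) -> float:
    """Fee paid to a Recovery Auditor (RA) as a share of the overpayment it recovers.

    Per the text, fees generally range from 9.0 to 17.5 percent, varying by
    RA region, the type of service reviewed, and how the provider remits
    the overpayment.
    """
    if not 0.09 <= rate <= 0.175:
        raise ValueError("rate outside the 9.0-17.5 percent range cited")
    return recovered_overpayment * rate

# Hypothetical: a $10,000 recovered overpayment at a 12.5 percent rate.
fee = ra_contingency_fee(10_000, 0.125)  # $1,250
```

Because the fee comes out of recovered funds rather than appropriations, nothing is paid when nothing is recovered, which is why the arrangement expands review capacity without placing new demands on the agency's budget.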
The RAs are also subject to limits that only allow them to review a certain percentage or number of a given provider’s claims. The RAs initially identified high rates of error for short inpatient hospital stays and targeted those claims for review. Certain hospital services, particularly services that require short hospital stays, can be provided in both an inpatient and outpatient setting, though inpatient services generally have higher Medicare reimbursement amounts. The RAs found that many inpatient services should have been provided on an outpatient basis and denied many claims for having been rendered in a medically unnecessary setting. Medicare has a process that allows for the appeal of claim denials, and hospitals appealed many of the short inpatient stay claims denied by RAs. Hospital appeals of RA claim denials helped contribute to a significant backlog in the Medicare appeals system. CMS subsequently obtained waiver authority aimed at determining whether RA prepayment reviews could prevent fraud and the resulting improper payments and, in turn, lower the FFS improper payment rate. From 2012 through 2014, operating under this waiver authority, CMS conducted the RA Prepayment Review Demonstration in 11 states. In these states, CMS directed the RAs to conduct prepayment claim reviews for specific inpatient hospital services. Additionally, the RAs conducted prepayment reviews of therapy claims that exceeded the annual per-beneficiary limit in the 11 demonstration states. Under the demonstration, instead of being paid a contingency fee based on recovered overpayments, the RAs were paid contingency fees based on claim denial amounts. In anticipation of awarding new RA contracts, CMS began limiting the number of RA claim reviews and discontinued the RA Prepayment Review Demonstration in 2014. CMS required the RAs to stop sending requests for medical documentation to providers in February 2014, so that the RAs could complete all outstanding claim reviews by the end of their contracts.
However, in June 2015, CMS cancelled the procurement for the next round of RA contracts, which had been delayed because of bid protests. Instead, CMS modified the existing RA contracts to allow the RAs to continue claim review activities through July 31, 2016. In November 2015, CMS issued new requests for proposals for the next round of RA contracts and, according to CMS officials, plans to award them in 2016. The SMRC conducts nationwide postpayment claim reviews as part of CMS-directed studies aimed at lowering improper payment rates. The SMRC studies often focus on issues related to specific services at high risk for improper payments, and provide CMS with information on the prevalence of the issues and recommendations on how to address them. Although CMS directs the types of services and improper payment issues that the SMRC examines, the SMRC identifies the specific claims that are reviewed as part of the studies. CMS’s CERT program annually estimates the amount and rate of improper payments in the Medicare FFS program, and CMS uses the CERT results, in part, to direct and oversee the work of claim review contractors, including the MACs, RAs, and SMRC. CMS’s CERT program develops its estimates by using a contractor to conduct postpayment claim reviews on a statistically valid random sample of claims. The CERT program develops the estimates as part of CMS’s efforts to comply with the Improper Payments Information Act, which requires agencies to annually identify programs susceptible to significant improper payments, estimate amounts improperly paid, and report these estimates and actions taken to reduce them. In addition, the CERT program estimates improper payment rates specific to Medicare service and provider types and identifies services that may be particularly at risk for improper payments. See Improper Payments Information Act of 2002 (IPIA), Pub. L. No. 107-300, 116 Stat. 2350 (2002) (codified, as amended, at 31 U.S.C. § 3321 note).
The IPIA was subsequently amended by the Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, 124 Stat. 2224 (2010), and the Improper Payments Elimination and Recovery Improvement Act of 2012, Pub. L. No. 112-248, 126 Stat. 2390 (2013). We have also reported that prepayment controls are generally more cost-effective than postpayment controls and help avoid costs associated with the “pay and chase” process. See GAO, A Framework for Managing Fraud Risks in Federal Programs, GAO-15-593SP (Washington, D.C.: July 28, 2015). CMS is not always able to collect overpayments identified through postpayment reviews. A 2013 HHS OIG study found that each year over the period from fiscal year 2007 to fiscal year 2010, approximately 6 to 9 percent of all overpayments identified by claim review contractors were deemed not collectible. Postpayment reviews require more administrative resources compared to prepayment reviews. Once overpayments are identified on a postpayment basis, CMS requires contractors to take timely efforts to collect the overpayments. HHS OIG reported that the process for recovering overpayments can involve creating and managing accounts receivables for the overpayments, tracking provider invoices and payments, and managing extended repayment plans for certain providers. In contrast, contractors do not need to take these steps, and expend the associated resources, for prepayment reviews, which deny claims before overpayments are made. Key stakeholders we interviewed identified few significant differences in conducting and responding to prepayment and postpayment reviews. Specifically, CMS, MAC, and RA officials stated that prepayment and postpayment review activities are generally conducted by claim review contractors in similar ways.
Officials we interviewed from health care provider organizations told us that providers generally respond to prepayment and postpayment reviews similarly, as both types of review occur after a service has been rendered, and involve similar medical documentation requirements and appeal rights. These statistics are based on CMS summary financial data, and the currently not collectible classification for overpayments can vary based on when overpayments are identified and demanded, and if overpayments are under appeal. See Department of Health and Human Services, Office of Inspector General, Medicare’s Currently Not Collectible Overpayments, OEI-03-11-00670 (Washington, D.C.: June 2013). First, providers may hold discussions with the RAs for postpayment review findings, and CMS recently implemented the option for SMRC findings as well. The discussions offer providers the opportunity to give additional information before payment determinations are made and before providers potentially enter the Medicare claims appeals process. Several of the provider organizations we interviewed found the RA discussions helpful, stating that some providers have been able to get RA overpayment determinations reversed. Such discussions are not available for RA prepayment claim reviews or for MAC reviews. CMS officials stated that the discussions are not feasible for prepayment claim reviews due to timing difficulties, as the MACs and RAs are required to make payment determinations within 30 days after receiving providers’ medical records. Second, providers stated that they may face certain cash flow burdens with prepayment claim reviews that they do not face with postpayment reviews due to how the claims are treated in the Medicare appeals process. When appealing postpayment review overpayment determinations, providers keep their Medicare payment through the first two levels of appeal before CMS recovers the identified overpayment.
If the overpayment determinations are overturned at a higher appeal level, CMS must pay back the recovered amount with interest accrued for the period in which the amount was recouped. In contrast, providers do not receive payment for claims denied on a prepayment basis and, if prepayment denials are overturned on appeal, providers do not receive interest on the payments for the duration the payments were held by CMS. The Medicare FFS appeals process consists of five levels of review that include CMS contractors, staff divisions within HHS, and ultimately, the federal judicial system, allowing appellants who are dissatisfied with the decision at one level to appeal to the next level. Each MAC’s claim review strategy includes the claims deemed most critical by each MAC to address and a description of plans to address them. In 2013 and 2014, the MACs conducted nearly all of their claim reviews on a prepayment basis. During the same time period, the MACs conducted approximately 76,000 postpayment claim reviews, though some MACs did not conduct any postpayment claim reviews. Prior to the establishment of the national RA program, the MACs conducted a greater proportion of postpayment reviews. However, the MACs have shifted nearly all of their focus to conducting prepayment reviews, as responsibility for conducting postpayment reviews has generally shifted to the RAs. According to CMS officials, the MACs currently use postpayment reviews to analyze billing patterns to inform other review activities, including future prepayment reviews, and to help determine where to conduct educational outreach for specific providers. CMS has also encouraged the MACs to use postpayment reviews to perform extrapolation, a process in which the MACs estimate an overpayment amount for a large number of claims based on a sample of claim reviews. According to CMS officials, extrapolation is not used often but is an effective strategy for providers that submit large volumes of low-dollar claims with high improper payment rates.
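Extrapolation as described above, estimating an overpayment amount for a large number of claims from a reviewed sample, can be sketched in simplified form. This is an illustration only, with hypothetical names and numbers, not CMS's actual statistical methodology:

```python
import math
import statistics

def extrapolate_overpayment(sample_overpayments, universe_size, z=1.645):
    """Project a total overpayment for a universe of claims from a reviewed sample.

    Returns a point estimate (sample mean times universe size) and a
    conservative lower bound that subtracts a one-sided confidence margin.
    """
    n = len(sample_overpayments)
    mean = statistics.mean(sample_overpayments)
    point_estimate = mean * universe_size
    stderr = statistics.stdev(sample_overpayments) / math.sqrt(n) if n > 1 else 0.0
    lower_bound = max((mean - z * stderr) * universe_size, 0.0)
    return point_estimate, lower_bound

# Hypothetical: 100 reviewed claims, 40 of which had a $100 overpayment,
# projected to a universe of 10,000 similar low-dollar claims.
sample = [0.0] * 60 + [100.0] * 40
point, lower = extrapolate_overpayment(sample, 10_000)  # point = $400,000
```

Demanding a conservative lower bound rather than the raw point estimate is one common way such projections hedge against sampling error, which is part of why extrapolation pays off mainly for providers with large volumes of low-dollar claims and high error rates.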
The SMRC is focused on examining Medicare billing and payment issues at the direction of CMS, and all of its approximately 178,000 reviews in 2013 and 2014 were postpayment reviews. The SMRC uses postpayment reviews because its studies involve developing sampling methodologies to examine issues with specific services or specific providers. For example, in 2013, CMS directed the SMRC to complete a national review of home health agencies, which involved reviewing five claims from every home health agency in the country. CMS had the SMRC conduct this study to examine issues arising from a new coverage requirement that raised the improper payment rate for home health services. Additionally, a number of SMRC studies used postpayment sampling to perform extrapolation to determine overpayment amounts for certain providers. The RAs generally conducted postpayment reviews, though they conducted prepayment reviews under the Prepayment Review Demonstration. The RAs conducted approximately 85 percent of their claim reviews on a postpayment basis in 2013 and 2014—accounting for approximately 1.7 million postpayment claim reviews—with the other 15 percent being prepayment reviews conducted under the demonstration. CMS is no longer using the RAs to conduct prepayment reviews because the demonstration ended. Outside of a demonstration, CMS must pay the RAs from recovered overpayments, which effectively limits the RAs to postpayment reviews. CMS and RA officials who we interviewed generally considered the demonstration a success, and CMS officials told us that they included prepayment reviews as a potential work activity in the requests for proposals for the next round of RA contracts, in the event that the agency is given the authority to pay RAs on a different basis. However, the President’s fiscal year budget proposals for 2015 through 2017 did not contain any legislative proposals to provide CMS such authority.
Obtaining the authority to allow the RAs to conduct prepayment reviews would align with CMS’s strategy to pay claims properly the first time. In not seeking the authority, CMS may be missing an opportunity to reduce the amount of uncollectible overpayments from RA reviews and save administrative resources associated with recovering overpayments. The rate of improper payments for home health services rose from 6.1 percent in fiscal year 2012 to 17.3 percent in fiscal year 2013 and to 51.4 percent in fiscal year 2014. According to CMS, the increase in improper payments occurred primarily because of CMS’s implementation of a requirement that home health agencies have documentation showing that referring providers conducted a face-to-face examination of beneficiaries before certifying them as eligible for home health services. Our analysis of RA claim review data shows that the RAs focused on reviewing inpatient claims in 2013 and 2014, though this focus was not consistent with the degree to which inpatient services constituted improper payments, or with CMS’s expectation that the RAs review all claim types. In 2013, a significant majority—78 percent—of all RA claim reviews were for inpatient claims, and in 2014, nearly half—47 percent—of all RA claim reviews were for inpatient claims (see Table 3). For RA postpayment reviews specifically, which excludes reviews conducted as part of the RA Prepayment Review Demonstration, 87 percent of RA reviews were for inpatient claims in 2013, and 64 percent were for inpatient claims in 2014. Inpatient services had high amounts of improper payments relative to other types of services—with over $8 billion in improper payments in fiscal year 2012 and over $10 billion in fiscal year 2013—which reflect the costs of providing these services. However, inpatient services did not have a high improper payment rate relative to other services and constituted about 30 percent of overall Medicare FFS improper payments in both years.
As will be discussed, the proportion of inpatient reviews in 2014 would likely have been higher if CMS—first under its own authority and then as required by law—had not prohibited the RAs from conducting reviews of claims for short inpatient hospital stays at the beginning of fiscal year 2014. The RAs conducted about 1 million fewer claim reviews in 2014 compared to 2013, and nearly all of the decrease can be attributed to fewer reviews of inpatient claims. In general, the RAs have discretion to select the claims they review, and their focus on reviewing inpatient claims is consistent with the financial incentives associated with the contingency fees they receive, as inpatient claims generally have higher payment amounts compared to other claim types. By law, RAs receive a portion of the recovered overpayments they identify, and RA officials told us that they generally focus their claim reviews on audit issues that have the greatest potential returns. Our analysis found that RA claim reviews for inpatient services had higher average identified improper payment amounts per postpayment claim review relative to other claim types in 2013 and 2014 (see Table 4). For example, in 2013, the RAs identified about 10 times the amount per postpayment claim review for inpatient claims compared to claim reviews for physicians. Although CMS expects the RAs to review all claim types, CMS’s oversight of the RAs did not ensure that the RAs distributed their reviews across claim types in 2013 and 2014. According to CMS officials, the agency’s approval of RA audit issues is the primary way in which CMS controls the type of claims that the RAs review. However, the officials said they generally focus on the appropriateness of the review methodology when determining whether to approve the audit issues, instead of on whether the RA’s claim review strategy encompasses all claim types. 
The RAs generally determine the types of audit issues that they present to CMS for approval, and based on our analysis of RA audit issues data, we found that from the inception of the RA program to May 2015, 80 percent of the audit issues approved by CMS were for inpatient claims. Additionally, CMS generally gives RAs discretion regarding the claims that they select for review among approved audit issues. Effective October 1, 2013, CMS changed the coverage requirements for short inpatient hospital stays. As a result, CMS prohibited RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission between October 1, 2013 and September 30, 2014. In April 2014 and April 2015, Congress enacted legislation directing CMS to continue the prohibition of RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission through September 30, 2015, unless there was evidence of fraud and abuse. Protecting Access to Medicare Act of 2014, Pub. L. No. 113-93, § 111, 128 Stat. 1040, 1044 (2014); Medicare Access and CHIP Reauthorization Act of 2015, Pub. L. No. 114-10, § 521, 129 Stat. 87, 176 (2015). In July 2015, CMS announced that it would not allow such RA claim reviews for claims with dates of admission of October 1, 2015 through December 31, 2015. The RAs were allowed to continue reviews of short stay inpatient claims for reasons other than reviewing inpatient status, such as reviews related to coding requirements. Beginning on October 1, 2015, Quality Improvement Organizations assumed responsibility for conducting initial claim reviews related to the appropriateness of inpatient hospital admissions. Starting January 1, 2016, the Quality Improvement Organizations will refer providers exhibiting persistent noncompliance with Medicare policies to the RAs for potential further review.
CMS stated that it will monitor the extent to which the RAs are reviewing all claim types, may impose a minimum percentage of reviews by claim type, and may take corrective action against RAs that do not review all claim types. CMS has also taken steps to provide incentives for the RAs to review other types of claims. To encourage the RAs to review DME claims—which had the highest rates of improper payments in fiscal years 2012 and 2013—CMS officials stated that they increased the contingency fee percentage paid to the RAs for DME claims. Further, in the requests for proposals for the next round of RA contracts, CMS included a request for a national RA that will specifically review DME, home health agency, and hospice claims. CMS officials told us that they are procuring this new RA because the existing four regional RAs reviewed a relatively small number of these types of claims. Although DME, home health agency, and hospice claims combined represented more than 25 percent of improper payments in both 2013 and 2014, they constituted 5 percent of RA reviews in 2013 and 6 percent of reviews in 2014. In 2013 and 2014, the MACs focused their claim reviews on physician and DME claims. Physician claims accounted for 49 percent of MAC claim reviews in 2013 and 55 percent of reviews in 2014, while representing 30 percent of improper payments in fiscal year 2012 and 26 percent in fiscal year 2013 (see Table 5). DME claims accounted for 29 percent of their reviews in 2013 and 26 percent in 2014, while representing 22 percent of total improper payments in fiscal year 2013 and 16 percent of improper payments in fiscal year 2014. DME claims also had the highest rates of improper payments in both years.
According to CMS officials, the MACs focused their claim reviews on physician claims—a category which encompasses a large variety of provider types, including labs, ambulances, and individual physician offices—because they constitute a significant majority of all Medicare claims. CMS officials also told us that they direct MAC claim review resources to DME claims in particular because of their high improper payment rate. Further, CMS officials told us that the MACs’ focus on reviewing physician and DME claims was in part due to how CMS structures the MAC claim review workload. CMS officials noted that each A/B MAC is responsible for addressing improper payments for both Medicare Part A and Part B, and MAC Part B claim reviews largely focus on physician claims. Additionally, 4 of the 16 MACs are DME MACs that focus their reviews solely on DME claims. CMS officials also noted that MAC reviews of inpatient claims were likely lowered during this period because of CMS’s implementation of new coverage policies for inpatient admissions. Similar to the RAs, the MACs were limited in conducting reviews for short inpatient hospital stays after October 1, 2013. The focus of the SMRC’s claim reviews depended on the studies that CMS directed the contractor to conduct in 2013 and 2014. In 2013, the SMRC focused its claim reviews on outpatient and physician claims, with physician claims accounting for half of all SMRC reviews (see Table 6). Physician claims accounted for 30 percent—the largest percentage—of the total amount of estimated improper payments in fiscal year 2012. In 2014, the SMRC focused 46 percent of its reviews on home health agency claims and 44 percent of its claim reviews on DME claims, which had the two highest improper payment rates in fiscal year 2013. CMS generally directs the SMRC to conduct studies examining specific services, and the number of claims reviewed by claim type is highly dependent on the methodologies of the studies.
For example, one SMRC study involved reviewing nearly 50,000 DME claims for suppliers deemed high risk for having improperly billed for diabetic test strips. In 2014, the claim reviews for this study accounted for all of the SMRC’s DME claim reviews and nearly half of all the SMRC claim reviews. Additionally, in 2014, the SMRC reviewed more than 50,000 claims as part of its study that examined five claims from every home health agency. The study followed a significant increase in the improper payment rate for home health agencies from 2012 to 2013, from 6 percent to 17 percent. In some cases, SMRC studies focused on specific providers. For example, a 2013 SMRC study reviewed claims for a single hospital to follow up on billing issues previously identified by the HHS OIG. The RAs were paid an average of $158 per claim review conducted in 2013 and 2014 and identified $14 in improper payments, on average, per dollar paid by CMS in contingency fees (see Table 7). The cost to CMS in RA contingency fees per review decreased from $178 in 2013 to $101 in 2014 because the average identified improper payment amount per review decreased from $2,549 to $1,509. The decrease in the average identified improper payment amount per review likely resulted from the RAs conducting proportionately fewer reviews of inpatient claims in 2014 compared to 2013. The SMRC was paid an average of $256 per claim review conducted in studies initiated in fiscal years 2013 and 2014, though the amount paid per claim review varied by study and varied between years (see Table 8). In particular, the amount paid to the SMRC is significantly higher for studies that involve extrapolation for providers who had their claims reviewed as part of the studies and were found to have a high error rate. Based on our analysis, the higher average amount paid per review in 2014—$346 compared to $110 in 2013—can in part be attributed to the SMRC conducting proportionally more studies involving extrapolation in 2014. 
As well as increasing study costs, the use of extrapolation can significantly increase the associated amounts of identified improper payments per study. For example, the SMRC study on diabetic test strips involved extrapolation and included reviews of nearly 50,000 claims from 500 providers. It cost CMS more than $23 million to complete, but the SMRC identified more than $63 million in extrapolated improper payments. According to CMS officials, the agency has the SMRC perform extrapolation as part of its studies when it is cost effective—that is, when anticipated extrapolated overpayment amounts are greater than the costs associated with having the SMRC conduct the extrapolations. The amount the SMRC was paid per review also varied based on the type of service being reviewed and the number of reviews conducted. CMS pays the SMRC more for claim reviews for Part A services, such as inpatient and home health claims, than for claim reviews for Part B services, such as physician and DME claims, because CMS officials said that claim reviews of Part A services are generally more resource-intensive. Additionally, CMS gets a volume discount on SMRC claim reviews, with the cost per review decreasing once the SMRC reaches certain thresholds for the number of claim reviews in a given year. The SMRC identified $7 in improper payments per dollar paid by the agency, on average, in 2013 and 2014, though the average amount varied considerably by study and varied for 2013 and 2014. In 2013, the SMRC averaged $25 in improper payments per dollar paid, while in 2014, it averaged $4. The larger figure for 2013 is primarily attributed to two SMRC studies that involved claim reviews of inpatient claims that identified more than $160 million in improper payments but cost CMS less than $1 million in total to conduct.
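The cost-effectiveness figures above reduce to two simple ratios. A minimal sketch, using the rounded dollar amounts from the diabetic-test-strip study (function names are my own):

```python
def cost_per_review(total_paid_to_contractor, reviews_conducted):
    """Average amount CMS paid the contractor per claim review."""
    return total_paid_to_contractor / reviews_conducted

def identified_per_dollar(identified_improper_payments, total_paid_to_contractor):
    """Improper payments identified per dollar CMS paid the contractor."""
    return identified_improper_payments / total_paid_to_contractor

# Diabetic-test-strip study, per the text: nearly 50,000 reviews, more than
# $23 million paid to the SMRC, more than $63 million identified.
per_review = cost_per_review(23_000_000, 50_000)            # $460 per review
per_dollar = identified_per_dollar(63_000_000, 23_000_000)  # about $2.7 per $1
```

By the same ratio, the two 2013 inpatient studies (more than $160 million identified for under $1 million paid) returned well over $160 per dollar, which is what drives the swing between the $25 average in 2013 and the $4 average in 2014.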
We were unable to determine the cost per review and the amount of improper payments identified by the MACs per dollar paid by CMS because the agency does not have reliable data on funding of MAC claim reviews for 2013 and 2014, and the agency collects inconsistent data on the savings from prepayment claim denials. For an agency to achieve its objectives, federal internal control standards provide that an agency must obtain relevant data to evaluate performance towards achieving agency goals (GAO/AIMD-00-21.3.1). By not collecting reliable data on claim review funding and by not having consistent data on identified improper payments, CMS does not have the information it needs to evaluate MAC cost effectiveness and performance in protecting Medicare funds.

According to CMS officials, the MACs have instead reported their costs under higher-level, broader contractual work activities, and CMS has not required the MACs to report data on specific funds spent to conduct prepayment and postpayment claim reviews. However, as of February 2016, CMS officials told us that all MACs are either currently reporting specific data on prepayment and postpayment claim review costs or planning to do so soon.

We also found that data on savings from MAC prepayment reviews were not consistent across the MACs. In particular, the MACs use different methods to calculate and report savings associated with prepayment claim denials, which represented about 98 percent of MAC claim review activity in 2013 and 2014. According to CMS and MAC officials, claims that are denied on a prepayment basis are never fully processed, and the Medicare payment amounts associated with the claims are never calculated. In the absence of processed payment amounts, the MACs use different methods for calculating prepayment savings. According to the MACs: Two MACs use the amount that providers bill to Medicare to calculate savings from prepayment claim denials.
However, the amount that providers bill to Medicare is often significantly higher than and not necessarily related to how much Medicare pays for particular services. One MAC estimated that billed amounts can be, on average, three to four times higher than allowable amounts. Accordingly, calculated savings based on provider billed amounts can greatly inflate the estimated amount that Medicare saves from claim denials. Nine MACs calculate prepayment savings by using the Medicare “allowed amount.” The allowed amount is the total amount that providers are paid for claims for particular services; it is generally marginally higher than the amount that Medicare pays because it includes the amount Medicare pays, cost sharing that beneficiaries are responsible for paying, and amounts that third parties are responsible for paying. Additionally, the allowed amounts may not account for Medicare payment policies that may reduce provider payments, such as bundled payments. Five MACs compare denied claims with similar claims that were paid to estimate what Medicare would have paid.

CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. Federal internal control standards provide that an agency must document guidance that has a significant impact on the agency’s ability to achieve its goals. In reviewing MAC claim review program documentation, including the Medicare Program Integrity Manual and MAC contract statements of work, we were unable to identify any instructions on how the MACs should calculate savings from prepayment claim denials. Further, several MACs we interviewed indicated that they have not been provided guidance for calculating savings from prepayment denials.
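The divergence among the three calculation methods can be illustrated with a hypothetical denied claim. All dollar amounts and variable names below are illustrative, not from the report; the billed amount is set at four times the allowed amount, the high end of the MAC estimate quoted above:

```python
# Hypothetical denied claim: Medicare would have paid $80, the beneficiary
# owed $20 in cost sharing, no third party owed anything, and the provider
# billed $400 (four times the allowed amount).
billed_amount = 400
medicare_payment = 80
beneficiary_cost_share = 20
third_party_amount = 0

# Method 1 (two MACs): provider billed amount -- can greatly inflate savings.
savings_billed = billed_amount

# Method 2 (nine MACs): Medicare "allowed amount" -- the payment plus cost
# sharing and third-party amounts, so marginally higher than Medicare's share.
savings_allowed = medicare_payment + beneficiary_cost_share + third_party_amount

# Method 3 (five MACs): average what Medicare paid for similar claims.
similar_paid_claims = [78, 82, 80]
savings_comparable = sum(similar_paid_claims) / len(similar_paid_claims)

print(savings_billed, savings_allowed, savings_comparable)  # 400 100 80.0
```

Three defensible-sounding methods report savings of $400, $100, and $80 for the same denial, which is why the savings data are not comparable across MACs without a uniform method.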
CMS officials told us that they were under the impression that all of the MACs were reporting prepayment savings data based on the amount that providers bill to Medicare, which can significantly overestimate the amount that Medicare saves from prepayment claim denials. Because CMS has not provided documented guidance on how to calculate savings from prepayment claim review, the agency lacks consistent and reliable information on the performance of MAC claim reviews. In particular, CMS does not have reliable information on the extent to which MAC claim reviews protect Medicare funds or on how the MACs’ performance compares to other contractors conducting similar activities.

CMS contracts with claim review contractors that use varying degrees of prepayment and postpayment reviews to identify improper payments and protect the integrity of the Medicare program. Though we found few differences in how contractors conduct and how providers respond to the two review types, prepayment reviews are generally more cost-effective because they prevent improper payments and limit the need to recover overpayments through the “pay and chase” process, which requires administrative resources and is not always successful. Although CMS considered the Prepayment Review Demonstration a success, and having the RAs conduct prepayment reviews would align with CMS’s strategy to pay claims properly the first time, the agency has not requested legislative authority to allow the RAs to do so. Accordingly, CMS may be missing an opportunity to better protect Medicare funds and agency resources. Inconsistent with federal internal control standards, CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. As a result, CMS does not have reliable data on the amount of improper payments identified by the MACs, which limits CMS’s ability to evaluate MAC performance in preventing improper payments.
CMS uses claim review contractors that have different roles and take different approaches to preventing improper payments. However, the essential task of reviewing claims is similar across the different contractors and, without better data, CMS is not in a position to evaluate the performance and cost effectiveness of these different approaches.

We recommend that the Secretary of HHS direct the Acting Administrator of CMS to take the following two actions: In order to better ensure proper Medicare payments and protect Medicare funds, CMS should seek legislative authority to allow the RAs to conduct prepayment claim reviews. In order to ensure that CMS has the information it needs to evaluate MAC effectiveness in preventing improper payments and to evaluate and compare contractor performance across its Medicare claim review program, CMS should provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews.

We provided a copy of a draft of this report to HHS for review and comment. HHS provided written comments, which are reprinted in appendix I. In its comments, HHS disagreed with our first recommendation, but it concurred with our second recommendation. HHS also provided us with technical comments, which we incorporated in the report as appropriate.

HHS disagreed with our first recommendation that CMS seek legislative authority to allow the RAs to conduct prepayment claim reviews. HHS noted that other claim review contractors conduct prepayment reviews and CMS has implemented other programs as part of its strategy to move away from the “pay and chase” process of recovering overpayments, such as prior authorization initiatives and enhanced provider enrollment screening. However, we found that prepayment reviews better protect agency funds compared with postpayment reviews, and believe that seeking the authority to allow the RAs to conduct prepayment reviews is consistent with CMS’s strategy.
HHS concurred with our second recommendation that CMS provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews. HHS stated that it will develop a uniform method to calculate savings from prepayment claim reviews and issue guidance to the MACs.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional requesters, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

Kathleen M. King, (202) 512-7114, [email protected]. In addition to the contact named above, Lori Achman, Assistant Director; Michael Erhardt; Krister Friday; Richard Lipinski; Kate Tussey; and Jennifer Whitworth made key contributions to this report. | CMS uses several types of claim review contractors to help reduce improper payments and protect the integrity of the Medicare program. CMS pays its contractors differently—the agency is required by law to pay RAs contingency fees from recovered overpayments, while other contractors are paid based on cost. Questions have been raised about the focus of RA reviews because of the incentives associated with the contingency fees. GAO was asked to examine the review activities of the different Medicare claim review contractors.
This report examines (1) differences between prepayment and postpayment reviews and the extent to which contractors use them; (2) the extent to which the claim review contractors focus their reviews on different types of claims; and (3) CMS's cost per review and amount of improper payments identified by the claim review contractors per dollar paid by CMS. GAO reviewed CMS documents; analyzed CMS and contractor claim review and funding data for 2013 and 2014; interviewed CMS officials, claim review contractors, and health care provider organizations; and assessed CMS's oversight against federal internal control standards. The Centers for Medicare & Medicaid Services (CMS) uses different types of contractors to conduct prepayment and postpayment reviews of Medicare fee-for-service claims at high risk for improper payments. Medicare Administrative Contractors (MAC) conduct prepayment and postpayment reviews; Recovery Auditors (RA) generally conduct postpayment reviews; and the Supplemental Medical Review Contractor (SMRC) conducts postpayment reviews as part of studies directed by CMS. CMS, its contractors, and provider organizations identified few significant differences between conducting and responding to prepayment and postpayment reviews. Using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS's goal to pay claims correctly the first time and can better protect Medicare funds because not all overpayments can be collected. In 2013 and 2014, 98 percent of MAC claim reviews were prepayment, and 85 percent of RA claim reviews and 100 percent of SMRC reviews were postpayment. Because CMS is required by law to pay RAs contingency fees from recovered overpayments, the RAs can only conduct prepayment reviews under a demonstration. From 2012 through 2014, CMS conducted a demonstration in which the RAs conducted prepayment reviews and were paid contingency fees based on claim denial amounts. 
CMS officials considered the demonstration a success. However, CMS has not requested legislation that would allow for RA prepayment reviews by amending existing payment requirements and thus may be missing an opportunity to better protect Medicare funds. The contractors focused their reviews on different types of claims. In 2013 and 2014, the RAs focused their reviews on inpatient claims, which represented about 30 percent of Medicare improper payments. In 2013 and 2014, inpatient claim reviews accounted for 78 and 47 percent, respectively, of all RA claim reviews. Inpatient claims had high average identified improper payment amounts, reflecting the costs of the services. The RAs' focus on inpatient claims was consistent with the financial incentives from their contingency fees, which are based on the amount of identified overpayments, but the focus was not consistent with CMS's expectations that RAs review all claim types. CMS has since taken steps to limit the RAs' focus on inpatient claims and broaden the types of claims being reviewed. The MACs focused their reviews on physician and durable medical equipment claims, the latter of which had the highest rate of improper payments. The focus of the SMRC's claim reviews varied. In 2013 and 2014, the RAs had an average cost per review to CMS of $158 and identified $14 in improper payments per dollar paid by CMS to the RAs. The SMRC had an average cost per review of $256 and identified $7 in improper payments per dollar paid by CMS. GAO was unable to determine the cost per review and amount of improper payments identified by the MACs per dollar paid by CMS because of unreliable data on costs and claim review savings. Inconsistent with federal internal control standards, CMS has not provided written guidance on how the MACs should calculate savings from prepayment reviews. 
Without reliable savings data, CMS does not have the information it needs to evaluate the MACs' performance and cost effectiveness in preventing improper payments, and CMS cannot compare performance across contractors. GAO recommends that CMS (1) request legislation to allow the RAs to conduct prepayment claim reviews, and (2) provide written guidance on calculating savings from prepayment reviews. The Department of Health and Human Services disagreed with the first recommendation, but concurred with the second. GAO continues to believe the first recommendation is valid as discussed in the report. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
For decades, Colombia was one of Latin America’s more stable democracies and successful economies. However, by the late 1990s it had entered a period of sustained crisis due to the emerging strength of the FARC, the Army of National Liberation (ELN), and paramilitary groups (primarily, the United Self Defense Forces of Colombia or AUC) who were increasingly financing their activities through profits from illicit narcotics. These groups were assuming increasing control of the coca and opium poppy growing areas of the country through wide scale violence and human rights abuses, which affected to varying degrees each of Colombia’s 32 departments (see fig. 1). Colombia suffered a severe economic downturn in the late 1990s as its armed forces and police were unable to respond to the growing strength of these illegal armed groups, and levels of murder, kidnapping, extortion, economic sabotage, and illicit drug trafficking spiraled upward. According to State, in the 7 years prior to Plan Colombia, coca cultivation had increased by over 300 percent and opium poppy cultivation had increased by 75 percent. Despite U.S. and Colombian efforts to counter the drug-trafficking activities of these illegal armed groups, State reports that Colombia remains the source for about 90 percent of the cocaine entering the United States, and the primary source of heroin east of the Mississippi River. According to State officials, FARC and other illegal groups remain active in areas where coca and opium poppy are grown and are involved in every facet of the narcotics business from cultivation to transporting drugs to points outside Colombia. Announced by Colombian President Andres Pastrana in 1999, Plan Colombia was designed to counter the country’s drug and security crisis through a comprehensive 6-year, $7.5 billion plan linked to three objectives: (1) reduce the flow of illicit narcotics and improve security, (2) promote social and economic justice, and (3) promote the rule of law. 
While the latter two objectives were not specifically designed to reduce the flow of illicit narcotics and improve security, they broadly facilitate these goals by addressing some of the underlying social and economic realities that drive individuals toward the illicit drug trade and by providing a legal framework for bringing drug traffickers and terrorists to justice. As shown in figure 2, State and Defense assistance for the Colombian military and National Police has supported a counternarcotics strategy focused on reducing illicit narcotics and improving security. Central to this support have been State-led efforts to provide the Colombians with air mobility, which supports the full range of military programs and many nonmilitary programs by providing access and security in remote areas. Nonmilitary assistance efforts are implemented by USAID, Justice, and State, which oversee a diverse range of social, economic, and justice initiatives.

In January 2007, the government of Colombia announced a 6-year follow-on strategy, the PCCP. This new strategy includes the same three broad objectives as Plan Colombia. The government of Colombia has pledged to provide approximately $44 billion for PCCP. The strategy notes that a certain level of support from the international community is still essential. At the time, the United States developed a proposed funding plan of approximately $4 billion in U.S. support for PCCP for fiscal years 2007 through 2013.

The government of Colombia significantly expanded the security component of Plan Colombia with its Democratic Security and Defense Policy in June 2003, which outlined a “clear, hold, and consolidate” strategy. The strategy’s main objective was to assert state control over the majority of Colombia’s national territory, particularly in areas affected by the activities of illegal armed groups and drug traffickers. Colombian officials said this new strategy will take years to fully implement. (See fig. 3.)
Expanded authority approved by the U.S. Congress at about the same time allowed agencies to support this security strategy. The government of Colombia has taken a number of steps to implement this strategy, including:

- Increasing the size of its military and police from 279,000 in 2000 to 415,000 in 2007.

- Conducting a series of offensive actions against FARC under a military strategy called Plan Patriota, which began in June 2003 with efforts to clear FARC from areas surrounding Colombia’s capital, Bogotá. In mid-2004, the military implemented a second, more ambitious phase of Plan Patriota aimed at attacking key FARC fronts encompassing the southern Colombian departments of Caquetá, Guaviare, and Meta. Based in Larandia, Joint Task Force-Omega was established in 2004 to coordinate the efforts of the Colombian Army, Air Force, and Marines in this area.

- Creating the Coordination Center for Integrated Government Action (CCAI) in 2004 to coordinate the delivery of military and civilian assistance in 58 targeted municipalities emerging from conflict in 11 regions throughout Colombia.

An updated version of the Colombian defense strategy was released in coordination with the PCCP strategy in January 2007. Incorporating lessons learned from the 2003 strategy, this latest strategy focuses on clearing one region at a time and places a greater emphasis on consolidating military gains through coordinated civil-military assistance designed to solidify the government’s presence in previously conflictive areas by providing a range of government services to local populations. To implement this strategy, the government of Colombia has taken several actions, including focusing Joint Task Force-Omega’s efforts in La Macarena—a traditional FARC stronghold—through a new military offensive called Plan Consolidación. The government also developed a coordinated military and civilian plan of action called the Consolidation Plan for La Macarena, which has been in place since October 2007.
As part of this plan, CCAI established a joint civil-military fusion center to coordinate military, police, economic development, and judicial activities. If successful, the approach in La Macarena is intended to serve as a model for similar CCAI efforts in 10 other regions of the country. It represents a key test of the government’s enhanced state presence strategy and a potential indicator of the long-term prospects for reducing Colombia’s drug trade by systematically re-establishing government control throughout the country.

Between fiscal years 2000 and 2008, the United States provided over $6 billion in military and nonmilitary assistance to Colombia. (See table 1.) Most State assistance for Colombia is overseen by its Bureau for International Narcotics and Law Enforcement Affairs (State/INL), though the Bureau for Political and Military Affairs is responsible for FMF and IMET funds. State/INL’s Narcotics Affairs Section (NAS) in the U.S. Embassy Bogotá oversees daily program operations. State’s Office of Aviation supports the NAS with advisors and contract personnel who are involved with the implementation of U.S. assistance provided to the Colombian Army’s Plan Colombia Helicopter Program (PCHP) and the National Police’s Aerial Eradication Program. The Military Group in the U.S. Embassy Bogotá manages both Defense counternarcotics support and State FMF and IMET funding. USAID and Justice have full-time staff based in Bogotá to oversee and manage their nonmilitary assistance programs. U.S. agencies are supported in their efforts by an extensive U.S.-funded contract workforce, which provides a range of services from aviation program support to alternative development project implementation.

From the outset of Plan Colombia, Congress has stated that U.S. assistance efforts should be nationalized over time and has followed through with a number of specific reporting requirements and budget decisions to help ensure this objective is achieved.
Beginning in 2004, Congress signaled that U.S. program administrators should begin the process of drawing down support for U.S. financed aviation programs in Colombia, which it noted accounted for a significant portion of U.S. assistance to Colombia. In 2005, House appropriators requested that the administration develop a multiyear strategy defining U.S. program and nationalization plans going forward under the PCCP. The administration responded in March 2006 with a report to Congress that outlined program achievements under Plan Colombia and a broad outline of planned nationalization efforts beginning with U.S. financed aviation programs. Follow-on reports issued in April 2007 and April 2008 further refined the administration’s plans by providing a proposed funding plan illustrating how U.S. assistance efforts would be reduced from 2007 through 2013 as the Colombians assume greater responsibility for programs funded and managed by the United States.

Plan Colombia’s goal of reducing the cultivation, processing, and distribution of illegal narcotics by targeting coca cultivation was not achieved. Although estimated opium poppy cultivation and heroin production were reduced by about 50 percent, coca cultivation and cocaine production increased, though data from 2007 indicate that cocaine production slightly declined. Colombia’s security climate has improved as a result of progress in a number of areas, but U.S. and Colombian officials cautioned that current programs must be maintained for several years before security gains can be considered irreversible.

From 2000 to 2006, estimated opium poppy cultivation and heroin production declined about 50 percent, but coca cultivation and cocaine production increased over the period. To put Colombia’s 6-year drug reduction goal in perspective, we note that although U.S.
funding for Plan Colombia was approved in July 2000, many U.S.-supported programs to increase the Colombian military and police capacity to eradicate drug crops and disrupt the production and distribution of heroin and cocaine did not become operational until 2001 and later. Meanwhile, estimated illicit drug cultivation and production in Colombia continued to rise through 2001, with estimated cultivation and production declining in 2002 through 2004. However, the declines for coca cultivation and cocaine production were not sustained. In addition, the estimated flow of cocaine towards the United States from South America rose over the period.

As illustrated in figure 4, estimated opium poppy cultivation and heroin production levels in 2006 were about half of what they had been in 2000. As illustrated in figure 5, coca cultivation was about 15 percent greater in 2006 than in 2000, with an estimated 157,000 hectares cultivated in 2006 compared to 136,200 hectares in 2000. State officials noted that extensive aerial and manual eradication efforts during this period were not sufficient to overcome countermeasures taken by coca farmers, as discussed later in this report. U.S. officials also noted that the increase in estimated coca cultivation levels from 2005 through 2007 may have been due, at least in part, to the Crime and Narcotics Center’s decision to increase the size of the coca cultivation survey areas in Colombia beginning in 2004 and subsequent years.

As illustrated in figure 6, estimated cocaine production was about 4 percent greater in 2006 than in 2000, with 550 metric tons produced in 2006 compared to 530 metric tons in 2000. However, in September 2008, ONDCP officials noted that cocaine production did not keep pace with rising coca cultivation levels because eradication efforts had degraded coca fields so less cocaine was being produced per hectare of cultivated coca.
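The percentage comparisons in this section follow directly from the hectare and metric-ton estimates; a quick arithmetic check (the helper name and use of Python are mine, the estimates are the report's):

```python
def pct_change(baseline, later):
    """Percent change from a baseline estimate to a later one."""
    return (later - baseline) / baseline * 100

# Coca cultivation: 136,200 hectares (2000) vs. 157,000 hectares (2006).
coca_hectares = pct_change(136_200, 157_000)   # ~15 percent greater

# Cocaine production: 530 metric tons (2000) vs. 550 metric tons (2006).
cocaine_tons = pct_change(530, 550)            # ~4 percent greater
```

Both results match the "about 15 percent" and "about 4 percent" figures cited for figures 5 and 6.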
ONDCP also announced that estimated cocaine production rates in Colombia for 2003 through 2007 had been revised downward based on the results of recent research showing diminished coca field yield rates. On the basis of these revised estimates, ONDCP estimated cocaine production decreased by almost 25 percent from a high of 700 metric tons in 2001 to 535 metric tons in 2007. As illustrated in figure 7, in 2000, the interagency counternarcotics community estimated that 460 metric tons of cocaine was flowing towards the United States from South America. In 2004, the interagency began reporting low and high ranges of estimated flow. Using the midpoints of these ranges, the estimated flow of cocaine to the United States in 2004 was about 500 metric tons; in 2005 it rose to over 625 metric tons; in 2006 and 2007, it was about 620 metric tons. Reductions in Colombia’s estimated cocaine production have been largely offset by increases in cocaine production in Peru and to a lesser extent Bolivia. Although U.S. government estimates suggest that South American cocaine production levels have fluctuated since 2000, production in 2007 was 12 percent higher than in 2000. See appendix III for more detail about the interagency counternarcotics community’s estimates of coca cultivation and cocaine production in Colombia, Bolivia, and Peru.

Since 2000, U.S. assistance has enabled the Colombians to achieve significant security advances in two key areas. First, the government has expanded its presence throughout the country, particularly in many areas formerly dominated by illegal armed groups. Second, the government, through its counternarcotics strategy, military and police actions, and other efforts (such as its demobilization and deserter programs) has degraded the finances of illegal armed groups and weakened their operational capabilities. These advances have contributed to an improved security environment as shown by key indicators (see figs.
8 through 10) reported by the government of Colombia. One central tenet of Plan Colombia and follow-on security plans is that the Colombian government must reassert and consolidate its control in contested areas dominated or partially controlled by illegal armed groups. According to an analysis provided by the Colombian Ministry of Defense in February 2008, the government was in full or partial control of about 90 percent of the country in 2007 compared with about 70 percent in 2003. U.S. officials we spoke to generally agreed that the government of Colombia had made major progress reasserting its control over large parts of the country and that Colombia’s estimates of enhanced state presence were reasonably accurate.

U.S. and Colombian officials and some observers agree that Plan Colombia’s counternarcotics and counterterrorism efforts have degraded the finances and operating capacity of illegal armed groups, including FARC, paramilitaries, ELN, and other drug-trafficking organizations. However, these officials also cautioned that FARC, while severely weakened, remains a threat to Colombia’s national security.

FARC’s Capabilities and Finances Have Been Significantly Reduced, but It Remains a National Security Threat

According to U.S. and Colombian officials and some reports, FARC’s capabilities and finances have been substantially diminished as a result of U.S. and Colombian counternarcotics efforts and continued pressure from the Colombian military. According to the Drug Enforcement Administration, since 2000, FARC has been Colombia’s principal drug-trafficking organization, accounting for approximately 60 percent of the cocaine exported from Colombia to the United States. According to ONDCP, FARC membership has declined from an estimated high of 17,000 in 2001 to an estimated force of 8,000 or less today.
In June 2007, ONDCP reported that Colombia’s antidrug efforts reduced FARC’s overall profits per kilogram of cocaine from a range of $320 to $460 in 2003 to between $195 and $320 in 2005. According to State and embassy officials, and nongovernmental observers, the number of FARC combatants and its capabilities have been dramatically reduced by continuous assaults on its top leadership, the capture or killing of FARC members in conflictive zones, and a large number of desertions. In 2007, the Colombian Ministry of Defense reported that it had captured or killed approximately 4,600 FARC combatants and about 2,500 had demobilized. According to the Colombian Ministry of Defense, as of July 2008, over 1,700 FARC have demobilized this year—over two-thirds of the total for all of 2007. U.S. Military Group officials told us FARC now avoids direct combat with Colombian security forces and is limited to hit and run terrorist attacks. Nonetheless, Defense and Colombian officials caution that FARC remains a national security threat, exercising control over important parts of the country, such as Meta, which serves as a key transport corridor linking many of the coca cultivation areas in the eastern part of the country with the Pacific ports used to transport cocaine out of the country. According to U.S. military group officials, the government of Colombia’s goal is to reduce FARC’s members, finances, and operating capabilities so it no longer poses a national security threat. To achieve this goal, Colombian President Uribe has accelerated the pace of all activities to help ensure this happens by 2010 when his current term ends. However, according to U.S. Military Group officials, FARC will not reach the point where it can no longer pose a significant threat to Colombia’s government until the number of combatants is reduced to less than 4,000. In February 2008, U.S. 
Military Group officials told us that they estimated that this point could be reached in 18 months, but not without continued U.S. support.

AUC Has Demobilized, but Remnants Remain a Threat

Beginning in late 2003, AUC entered into a peace accord with the government of Colombia to demobilize and lay down its arms. From 2003 to 2006, AUC paramilitary members reported to demobilization centers around the country. According to USAID officials, approximately 32,000 paramilitary soldiers and support staff entered the demobilization process. However, according to Defense officials, former midlevel officers of AUC have taken advantage of the vacuum created by the demobilization of AUC to form or join regional criminal bands engaged in drug trafficking, which threaten to destabilize the political system and civilian security. According to a May 2007 report by the International Crisis Group, estimates of the total number of individuals involved in these criminal bands range from 3,000 to 9,000, with many of the members former AUC. These include the “Aguilas Negras” (Black Eagles), which operates in northeastern Colombia along the border with Venezuela, and the “Nueva Generación Organización” (New Generation Organization), which operates in the department of Nariño. According to Defense officials, while homicides and kidnappings throughout Colombia have decreased, fighting among illegal armed groups has resulted in an increase in violence and internal displacement in certain regions of the country, such as the southern Colombian department of Nariño.

ELN Has Been Weakened and Drug-Trafficking Organizations Have Been Fragmented

According to U.S. embassy and Colombian military officials, a number of factors, including Colombian counternarcotics efforts, military pressure, and competition with FARC, have combined to weaken ELN. According to U.S.
military group officials, in 2000, ELN was estimated to number approximately 5,000 combatants; it is currently estimated to number between 2,200 and 3,000. According to the Drug Enforcement Administration, in addition to the insurgent and paramilitary groups that engage in drug trafficking, other major drug trafficking groups operate in Colombia. These include the North Valle de Cauca group based in the southwestern Colombian department of Valle de Cauca and the North Coast group based in the Caribbean cities of Cartagena, Barranquilla, and Santa Marta. According to Drug Enforcement Administration officials and reports, Colombian law enforcement successes, including the arrest and extradition of major traffickers, have helped fragment these groups, forcing them to become “niche” organizations, specializing in limited aspects of the drug trade in order to avoid being identified, arrested, and prosecuted. Nevertheless, according to a 2006 Drug Enforcement Administration report, these organizations are increasingly self-sufficient in cocaine base production, have a firm grip on Caribbean and Pacific smuggling routes, and dominate the wholesale cocaine markets in the eastern United States and Europe. State and Defense provided nearly $4.9 billion from fiscal years 2000 to 2008 to the Colombian military and police to support Plan Colombia’s counternarcotics and security objectives (see table 2). U.S. assistance to the Colombian military has focused on developing the capabilities of the Colombian Army’s Aviation Brigade and the creation of an Army Counternarcotics Brigade and mobile units that focus on counternarcotics, infrastructure protection, and counterinsurgency missions. State and Defense also provided extensive support for the Air Force’s Air Bridge Denial Program; and Navy and Marine interdiction efforts. U.S. support for the National Police has focused on its Aerial Eradication Program and Air Service. Other U.S. 
assistance supported the creation of mobile squadrons of rural police (referred to as “Carabineros”), which have helped establish (1) a police presence in 169 Colombian municipalities that had no police presence in 2002, and (2) specialized interdiction programs that attack cocaine labs and narcotrafficking in the ports. This support has led to a range of accomplishments since 2000, including increasing the cost of doing business for both coca farmers and drug traffickers by eradicating illicit drug crops and seizing finished product; destroying hydrochloride laboratories; demobilizing, capturing, and killing thousands of combatants; and capturing or killing several high-profile leaders of FARC and other illegal armed groups. Program officials noted, however, that a number of challenges have diminished the effect U.S. assistance has had on reducing the flow of cocaine to the United States, including the countermeasures taken by coca farmers to mitigate the effect of U.S. and Colombian eradication programs. Since fiscal year 2000, State and Defense have provided over $844 million to help expand and maintain an Army Aviation Brigade that has seen almost a threefold increase in the number of aircraft it manages and a near doubling in its total personnel since 2000. Increased air mobility has been described by the Colombian Ministry of Defense as a key outcome of U.S. support for Plan Colombia. Air mobility is needed to conduct spray operations and move Army Counternarcotics Brigade personnel to eradication sites to provide needed security. Air mobility is also needed to transport different Colombian army units waging security operations against FARC and other illegal armed groups, where rapid deployment is essential for delivering combat troops to the point of attack. The brigade consists of three fleets of helicopters. The first, referred to as the Plan Colombia Helicopter Program or PCHP, consists of 52 U.S. 
aircraft—17 UH-1Ns, 22 UH-IIs, and 13 UH-60L Blackhawks—that State provided to the Colombians under a no-cost lease. The second fleet, commonly referred to as the FMS fleet, consists of 20 UH-60Ls, which Colombia acquired through the Foreign Military Sales (FMS) program. The third fleet consists primarily of Russian and U.S. aircraft leased by the Army Aviation Brigade, along with aircraft that have been nationalized. State, with assistance from Defense, has provided the PCHP fleet with the essential support components needed to manage a modern combat aviation service, including infrastructure and maintenance support; contract pilots and mechanics; assistance to train pilots and mechanics; flight planning, safety, and quality standards and procedures; and a logistics system. Defense provides a Technical Assistance Field Team to support the brigade’s FMS fleet. The team is contracted to provide oversight of FMS fleet maintenance activities and to help train brigade mechanics working on these helicopters. Defense also is providing the Ministry of Defense with a logistics system and a limited aviation depot to enable the Colombians to perform certain depot-level repairs on their helicopters. Appendix II describes these support services in more detail. Figure 11 illustrates some examples. According to U.S. and Colombian officials, a key challenge facing the brigade is to train and retain enough pilots and mechanics to manage the brigade without continued U.S. support—a challenge we have noted previously. In June 2003, we reported that the Colombian Army could not maintain the PCHP helicopters because it did not have sufficient numbers of qualified pilots and mechanics. At that time, U.S. officials expected they would have enough trained entry level pilots by December 2004. They also told us that 54 maintenance personnel required basic training, but noted that it would be 3 to 5 years before these mechanics would be qualified to repair helicopters. 
We found that the Army Aviation Brigade is still understaffed. According to State, as of June 2008, a total of 43 contract pilots and 87 contract mechanics were needed to operate the PCHP program. U.S. officials expect that almost all of these contract personnel will be replaced with Colombian Army personnel by 2012, at which time U.S. program officials said all program support to the Army Aviation Brigade would consist of technical support. According to the Commander of the Army Aviation Brigade, however, the Colombians are buying 15 additional UH-60 Blackhawks through the FMS system for delivery starting in October 2008 and, in July 2008, the United States loaned 18 UH-1Ns from PCHP’s inventory to Colombia. These additional helicopters will strain U.S. efforts to help the Colombians ensure they have enough trained pilots and mechanics to meet their needs. Military Group and NAS officials told us that current U.S. funding and training plans can accommodate Colombia’s planned FMS purchase and the 18 loaned UH-1Ns. These officials cautioned, however, that any additional Colombian aircraft purchases will have a significant impact on future funding and training requirements. While the Colombian Army has not had difficulty retaining pilots, the lack of a dedicated career path that provides an incentive for pilots to remain with the brigade could adversely affect retention. According to a U.S. Embassy Bogotá report, the lack of a warrant officer program means that, to earn promotion, Army Aviation Brigade officers must command ground troops, taking them away from being helicopter pilots. This lack of a dedicated career path may be a problem as more junior staff progress in their careers. According to the Commander of the Army Aviation Brigade, the Colombian Army has approved plans to establish a career path for military and police aviators by creating a warrant officer program. 
However, the Ministry of Defense and the Colombian legislature must approve this before the program can begin. Since fiscal year 2000, State and Defense have provided over $104 million to advise, train, and equip Colombian ground forces, which grew by almost 50 percent during this period. This assistance supported the creation of an Army Counternarcotics Brigade, Army mobile units, and a Joint Special Operations Command. Each pursues various counternarcotics and counterinsurgency missions under a national joint command structure. The Army’s Counternarcotics Brigade was originally established in 1999 to plan and conduct interdiction operations against drug traffickers in southern Colombia. U.S. and Colombian officials credit the brigade with providing the security needed to conduct aerial and manual eradication operations, along with drug and precursor seizures and the destruction of base and hydrochloride laboratories. The brigade’s initial focus was on the departments of Putumayo and Caquetá where, at the time, much of Colombia’s coca cultivation was located. Subsequently, the brigade was designated a national asset capable of operating anywhere in Colombia. The brigade’s mission was also extended to include counterinsurgency operations in line with expanded program authority passed by Congress in 2002 that allowed U.S. assistance to be used for both counternarcotics and counterterrorism purposes. Defense provided the brigade with training, equipment, and infrastructure support including the construction of facilities at Tres Esquinas and Larandia, while State assistance provided the brigade with weapons, ammunition, and training. The brigade carries out ground interdiction operations and provides ground security for the National Police’s aerial and manual eradication efforts. The brigade is supported by the Army Aviation Brigade, which provides air mobility. According to State and U.S. 
Military Group officials, the brigade now provides its own training and most of its equipment. State reduced its funding for the brigade from approximately $5 million in fiscal year 2004 to $2.2 million in fiscal year 2007, and funding is scheduled to remain at this level in fiscal year 2008. Defense-provided support has helped equip mobile Army brigades and joint special forces units which, according to Defense officials, seek to establish “irreversible” security gains against FARC and other illegal armed groups. In particular, this assistance (1) enabled the Army to form mobile brigades for counterinsurgency efforts, such as Joint Task Force-Omega in central Colombia, and (2) facilitated the establishment of a Joint Special Forces Command made up of a commando unit, an urban hostage rescue unit, and a Colombian Marine special forces unit. According to Defense officials, U.S. assistance to the mobile brigades consisted primarily of intelligence and logistics support, training, weapons, ammunition, vehicles, and infrastructure support, including a fortified base in La Macarena, which is the home base for Joint Task Force-Omega’s mobile units. This assistance has helped the Colombian Army conduct mobile operations throughout Colombia, a capacity that Defense officials said generally did not exist at the outset of Plan Colombia. According to a senior U.S. Military Group official, the mobile brigades’ effectiveness can be seen in the number of combatants from illegal armed groups captured, killed, or persuaded to surrender. For example, Joint Task Force-Omega documentation provided by the Colombians shows that, as of February 2008, the task force had captured over 1,000 combatants, killed almost 100, and persuaded about 400 to surrender. The United States continues to provide support for the Army’s mobile brigades, but U.S. officials expect this support to be reduced as the brigades become increasingly self-sufficient. U.S. 
support has helped the Colombian military establish a Joint Special Forces Command that also operates under the direction of the General Command of the Armed Forces. The support consisted of training, weapons, ammunition, and infrastructure support, including for the command’s principal compound near Bogotá. According to Defense officials, the command includes approximately 2,000 soldiers from five units made up of Colombian Army, Air Force, and Marine components. It is tasked with pursuing high-value targets and rescuing hostages in urban and rural environments. U.S. officials described this command as similar to the U.S. Special Operations Command and said that, prior to 2004, the Colombian military did not have the capability to conduct joint special forces operations. According to U.S. officials, the command has been involved in a number of high-profile operations, including the recent rescue of 15 hostages that included three U.S. citizens. In fiscal years 2000-2008, Congress provided over $115 million to help Colombia implement phase one of its infrastructure security strategy, designed to protect the first 110 miles of the nearly 500 mile-long Caño Limón-Coveñas oil pipeline from terrorist attacks. In prior years, insurgent attacks on the pipeline resulted in major economic losses for both the Colombian government and oil companies operating in the country. For instance, in 2001, the pipeline was attacked 170 times and forced to shut down for over 200 days, resulting in approximately $500 million in lost revenues, as well as considerable environmental damage. According to State, there was only one attack made on the entire length of the pipeline in 2007. U.S. support provided for both an aviation component and a ground combat support element and included two UH-60 Blackhawk helicopters, eight UH-II helicopters, and related logistics support and ground facilities. Nearly $30 million was used for U.S. 
Special Forces training and equipment provided to about 1,600 Colombian Army soldiers assigned to protect this portion of the pipeline. In December 2007, the United States transferred operating and funding responsibility for the infrastructure security strategy to Colombia— including nine helicopters. Beginning in fiscal year 2003, State has provided over $62 million in assistance to enable the Colombian Air Force to implement the Air Bridge Denial (ABD) program, which is designed to improve the Colombian government’s capacity to stop drug trafficking in Colombian airspace by identifying, tracking, and forcing suspicious aircraft to land so that law enforcement authorities can take control of the aircraft, arrest suspects, and seize drugs. The program was expanded in 2007 to include surveillance of Colombia’s coastal waters to strengthen the Colombian government’s capacity to address the emerging threat posed by semisubmersible vessels. To support the program, State and Defense have provided the Colombian Air Force with seven surveillance aircraft, which monitor Colombian airspace for suspicious traffic, infrastructure support at four ABD bases located across Colombia, contract aviation maintenance support, training, ground and air safety monitors, and funding for spare parts and fuel. The program also utilizes a network of U.S. detection resources including five in-country radars, over-the-horizon radars located outside Colombia, and airborne radar systems. In June 2007, the United States began nationalizing the ABD program, including transferring the title of surveillance aircraft and responsibility for operating and maintaining the five radars located in Colombia. According to NAS officials, the United States is training Colombian Air Force ground and air safety monitors and maintenance personnel and expects to nationalize the program by 2010, with only limited U.S. funding in subsequent years. 
According to NAS officials, suspicious aircraft tracks dropped from 637 in 2003 to 84 in 2007. In 2007, the Colombian Air Force forced three suspected drug-trafficking aircraft to land and each aircraft was seized; however, according to a senior NAS official, the crews escaped, and no cocaine was found. In the same year, the ABD program was expanded to include a maritime patrol mission. While conducting a maritime patrol, ABD aircraft assisted in the sinking of two self-propelled semisubmersibles, which resulted in the arrest of seven individuals and the seizure or destruction of approximately 11 metric tons of cocaine. In our September 2005 report, we noted that the stated purpose of the program (the seizure of aircraft, personnel, and drugs) was rarely achieved, though the program did succeed in reducing the number of suspicious flights over Colombia—a valuable program outcome, according to U.S. and Colombian officials. Since fiscal year 2000, State and Defense provided over $89 million to help sustain and expand Colombian Navy and Marine interdiction efforts. According to Defense, from January to June 2007, an estimated 70 percent of Colombia’s cocaine was smuggled out of the country using go-fast vessels, fishing boats, and other forms of maritime transport. State and Defense support for the Colombian Navy is designed to help improve their capacity to stop drug traffickers from using Colombia’s Caribbean and Pacific coasts to conduct drug-trafficking activities. State and Defense support for the Colombian Marines is designed to help gain control of Colombia’s network of navigable rivers, which traffickers use to transport precursor chemicals and finished products. According to Colombian Ministry of Defense officials, the number of metric tons of cocaine seized by the Navy and Marines represented over half of all cocaine seized by Colombia in 2007. 
State and Defense assistance to the Colombian Navy provided for infrastructure development (such as new storage refueling equipment for the Navy station in Tumaco), the transfer of two vessels to Colombia, eight “Midnight Express” interceptor boats, two Cessna Grand Caravan transport aircraft, weapons, fuel, communications equipment, and training. State assistance also helped the Colombian Navy establish a special intelligence unit in the northern city of Cartagena to collect and distribute time-sensitive intelligence on suspect vessels in the Caribbean. In 2007, the unit coordinated 35 interdiction operations, which resulted in the arrests of 40 traffickers, the seizure of over 9 metric tons of cocaine, and the seizure of 21 trafficker vessels including one semisubmersible vessel. The U.S. Embassy Bogotá credits this unit for over 95 percent of all Colombian Navy seizures in the Caribbean, forcing traffickers to rely more on departure sites along the Pacific Coast and areas near Venezuela and Panama. The Colombian Navy faces certain challenges. First, it generally lacks the resources needed to provide comprehensive coverage over Colombia’s Pacific coastline. For example, according to Colombian Navy officials, the Navy has only three stations to cover all of Colombia’s Pacific coastline. Second, according to U.S. Embassy Bogotá officials, these services lack adequate intelligence information to guide interdiction efforts along the Pacific coast. According to embassy officials, the United States is working with the Colombians to expand intelligence gathering and dissemination efforts to the Pacific coast, in part by providing support to expand the Navy’s intelligence unit in Cartagena to cover this area. Third, traffickers have increasingly diversified their routes and methods, including using semisubmersibles to avoid detection. 
For the Colombian Marines, State and Defense provided support for infrastructure development (such as docks and hangars), 95 patrol boats, weapons, ammunition, fuel, communications equipment, night vision goggles, and engines. Colombia’s rivers serve as a vital transport network and are used to move the precursor chemicals used to make cocaine and heroin, as well as to deliver the final product to ports on Colombia’s Caribbean and Pacific coasts. According to State, up to 40 percent of the cocaine transported in Colombia moves through the complex river network in Colombia’s south-central region to the southwestern coastal shore. According to U.S. Southern Command officials, the key challenge facing the riverine program is a general lack of resources given the scope of the problem. The Colombian Marines maintain a permanent presence on only about one-third of Colombia’s nearly 8,000 miles of navigable rivers. U.S. embassy planning documents have set a goal of helping the Colombian Marines achieve a coverage rate of at least 60 percent by 2010. Since the early 1990s, State/INL has supported the Colombian National Police Aerial Eradication Program, which is designed to spray coca and opium poppy crops. Since fiscal year 2000, State has provided over $458 million to support the program, which has increased its spray operations about threefold. The Aerial Eradication Program consists of U.S.-owned spray aircraft and helicopters, as well as contractor support to help fly, maintain, and operate these assets at forward operating locations throughout Colombia. As of August 2008, these aircraft included 13 armored AT-802 spray aircraft; 13 UH-1N helicopters used as gunships or search and rescue aircraft; four C-27 transport aircraft used to ferry supplies and personnel to and from the various spray bases; and two reconnaissance aircraft used to find and identify coca cultivation, and to plan and verify the results of spray missions. 
A typical spray mission consists of four spray aircraft supported by helicopter gunships to protect the spray aircraft, along with a search and rescue helicopter to rescue downed pilots and crew. In addition, ground security is provided as needed by the Army Counternarcotics Brigade. U.S.-funded counternarcotics efforts, which focused on aerial spraying, did not achieve Plan Colombia’s overarching goal of reducing the cultivation, production, and distribution of cocaine by 50 percent, in part because coca farmers responded with a series of effective countermeasures. These countermeasures included (1) pruning coca plants after spraying; (2) replanting with younger coca plants or plant grafts; (3) decreasing the size of coca plots; (4) interspersing coca with legitimate crops to avoid detection; (5) moving coca cultivation to areas of the country off-limits to spray aircraft, such as the national parks and a 10-kilometer area along Colombia’s border with Ecuador; and (6) moving coca crops to more remote parts of the country—a development that has created a “dispersal effect” (see figures 12 and 13). While these measures allowed coca farmers to continue cultivation, they have increased coca farmers’ and traffickers’ cost of doing business. NAS officials said Colombia and the United States have taken several actions to address this issue. For instance, the government of Colombia initiated a program in 2004 to manually eradicate coca. Since 2004, the amount of coca manually eradicated increased from about 11,000 hectares to about 66,000 hectares in 2007. According to NAS officials, in response to congressional budget cuts in fiscal year 2008, the embassy reduced its aerial eradication goal to 130,000 hectares, compared with 160,000 hectares in 2007. This reduction may be offset by a planned increase in manual eradication efforts from 66,000 hectares in 2007 to 100,000 hectares in 2008. 
However, manual eradication efforts require significant personnel, security, and transportation resources, including air mobility. Through the end of May 2008, Colombia reported that about 28,000 hectares had been manually eradicated. In addition, manual eradication often takes place in conflictive areas against a backdrop of violence, which makes full implementation of this strategy even more problematic. According to State, despite protection measures taken, manual eradicators were attacked numerous times—by sniper fire, minefields, and improvised explosive devices—and through August 2008, 23 eradicators were killed, bringing to 118 the total number of eradicators killed since 2005.

National Police Air Service

Since fiscal year 2000, State provided over $463 million to help expand and sustain the Police Air Service (known by its Spanish acronym, ARAVI). Similar to the role played by the Army Aviation Brigade, ARAVI provides air mobility support for a range of National Police functions, including aerial and manual eradication efforts that require gunship and search and rescue support for the spray planes, as well as airlift support for the manual eradication teams and associated security personnel. In addition, ARAVI provides airlift for the National Police’s commando unit, known as the Junglas. According to NAS officials, ARAVI consists of 61 NAS-supported aircraft and 30 National Police-supported aircraft. Key program support elements include hangar and taxiway construction upgrades to the Air Service’s operating base outside of Bogotá; the provision of contract mechanics; training; and funding for spare parts, fuel, and other expenses. Appendix II describes these support services in more detail. According to NAS officials, in addition to better managing its aviation assets, ARAVI has become self-sufficient in some areas. For instance, it provides its own entry-level pilot and mechanic training and can plan and execute its own operations. 
However, U.S. and contractor officials said that ARAVI continues to suffer from major limitations. According to NAS and contractor officials, ARAVI:

- Receives approximately 70 percent of its total maintenance and operating funding from State. According to Embassy Bogotá officials, the Colombian Ministry of Defense often underfunds the service on the assumption that State will make up the difference.

- Lacks some specialized maintenance personnel. For instance, according to State-funded U.S. contractor personnel, in February 2008, the service had only about half of the required number of quality control inspectors. To make up the shortfall, the service relies on quality control inspectors provided by the contractor.

- Has high absentee rates. This is a problem that we have reported on in the past. For example, according to data supplied by the contractor, during the second week of February 2008, only 25 percent of the technicians and 40 percent of the assigned inspectors were present to service ARAVI’s UH-60s.

Since fiscal year 2000, State provided over $153 million to strengthen the National Police’s efforts to interdict illicit drug trafficking. According to State, in fiscal year 2007, it focused most of its assistance on equipping and training the Junglas, but also provided assistance for maritime, airport, and road interdiction programs. The Junglas consist of 500 specially selected police divided into three companies based at Bogotá, Santa Marta, and Tulua, as well as a 60-man instructor group based at the National Police rural training center. Described by U.S. Embassy Bogotá officials as widely considered one of the best trained and equipped commando units in Latin America, the Junglas are often the unit of choice in operations to destroy drug production laboratories and other narcoterrorist high-value targets, many of which are located in remote, hard-to-find locations. 
State support for the Junglas consisted of specialized equipment typically provided to U.S. Army Special Forces teams, such as M-4 carbines, mortars, helmets, and vests, as well as specialized training provided in Colombia and the United States. According to State, in 2006 and 2007, the Junglas were responsible for more than half of all the hydrochloride and coca base laboratories destroyed by the National Police, and seized over 64 metric tons of cocaine during the same period. State also supported the National Police’s maritime and airport security programs to strengthen the National Police’s capability to protect against illicit cargo—primarily narcotics—smuggled through Colombia’s principal seaports and airports. State assistance included funding for training, technical assistance, and limited logistical support (including K-9 support) for port security units at eight Colombian seaports and six airports. According to State, units based at Colombia’s principal seaports and airports seized more than 13 metric tons of illicit drugs in 2006, a figure that rose to over 22 metric tons in 2007. Since fiscal year 2000, the United States provided over $92 million to help the Colombians establish Carabineros squadrons. The Carabineros were initially created to provide an immediate State presence in conflictive areas reclaimed by the Colombian military. According to State, the Colombians currently have 68 Carabineros squadrons, each staffed with 120 personnel. The squadrons provide temporary support as other government services and a permanent police presence are established in reclaimed areas. State support consisted of training, weapons, ammunition, night vision goggles, metal detectors, radios, vehicles, and other items, including some limited support for permanent police stations. The Carabineros supported President Uribe’s goal of re-establishing a State presence in each of the country’s 1,099 municipalities (169 municipalities had no police presence prior to 2002). 
Though a July 2007 U.S. Embassy Bogotá report noted there are now police stations in every municipality throughout Colombia, these often consist of a small number of police who are responsible for areas covering hundreds of square miles of territory. Despite these limitations, State noted that, in contrast to earlier years, no police stations were overrun in 2007. NAS officials attributed this development to improved base defense training, defensive upgrades, and the increased police presence that Carabinero squadrons provide in rural areas. Since fiscal year 2000, the United States has provided nearly $1.3 billion for nonmilitary assistance to Colombia, focusing on the promotion of (1) economic and social progress and (2) the rule of law, including judicial reform. To support social and economic progress, the largest share of U.S. nonmilitary assistance has gone toward alternative development, which has been a key element of U.S. counternarcotics assistance and has bettered the lives of hundreds of thousands of Colombians. Other social programs have assisted thousands of internally displaced persons (IDPs) and more than 30,000 former combatants. Assistance for the rule of law and judicial reform has expanded access to the democratic process for Colombian citizens, including the consolidation of state authority and the establishment of government institutions and public services in many areas reclaimed from illegal armed groups. (See table 3.) Nevertheless, these programs face several limitations and challenges. For example, the geographic areas where alternative development programs operate are limited by security concerns, and the programs have not demonstrated a clear link to reductions in illicit drug cultivation and production. In addition, many displaced persons may not have access to IDP assistance, the reintegration of former combatants into society and reparations to their victims have been slow, and funding to continue these programs is a concern. 
Finally, Colombia’s justice system has limited capacity to address the magnitude of criminal activity in Colombia. USAID provided more than $500 million in assistance between fiscal years 2000 and 2008 to implement alternative development projects, which are a key component of the U.S. counternarcotics strategy in Colombia. USAID’s goal for alternative development focuses on reducing the production of illicit narcotics by creating sustainable projects that can function without additional U.S. support after the start-up phase is implemented. In recent years, USAID modified its alternative development strategy to emphasize sustainability. With regard to its strategic goal, alternative development projects face two key challenges—USAID currently has almost no alternative development projects in areas where the majority of coca is grown, and a government of Colombia policy prohibits alternative development assistance projects in communities where any illicit crops are being cultivated. USAID’s original alternative development strategy in 2000 focused on encouraging farmers to manually eradicate illicit crops and providing assistance to those who did through licit, short-term, income-producing opportunities. These efforts were concentrated in the departments of Caquetá and Putumayo, where, at the time, most of Colombia’s coca was cultivated and where U.S. eradication efforts were focused. However, USAID and its implementing partners found it difficult to implement projects in the largely undeveloped south where the Colombian government exercised minimal control. As a result, in February 2002, USAID revised its approach to support long-term, income-generating activities, focus more attention and resources outside southern Colombia, and encourage private-sector participation. In 2004, we reported that the revised alternative development program had made progress but was limited in scope and may not be sustainable. 
USAID revised its alternative development strategy beginning in 2006 to focus on specific geographic corridors, improve coordination, and increase the likelihood of achieving sustainable projects. The geographic corridors are in six regions in the western part of Colombia where the government has greater control and markets and transportation routes are more developed. However, the corridors are not in primary coca cultivation areas. USAID officials told us that the alternative development corridors are intended to act as a magnet, providing legal economic opportunities to attract individuals from regions that cultivate illicit crops, while also preventing people within the corridors from cultivating coca. USAID’s current strategy is carried out through two major projects—Areas for Municipal Level Alternative Development (ADAM) and More Investment for Sustainable Alternative Development (MIDAS). ADAM works with individuals, communities, and the private sector to develop licit crops with long-term income potential, such as cacao and specialty coffee. ADAM also supports social infrastructure activities such as schools and water treatment plants, providing training, technical assistance, and financing of community projects. It emphasizes engagement with communities and individual beneficiaries to get their support and focuses on smaller scale agricultural development with long-term earning potential. For example, under ADAM, USAID provided infrastructure improvements to a facility that processes blackberries in order to increase capacity and continues to provide technical assistance to farmers who grow blackberries for the facility. MIDAS promotes private-sector led business initiatives and works with the Colombian government to make economic and policy reforms intended to maximize employment and income growth. 
USAID encourages public and private-sector investment in activities that raise rural incomes and create jobs, and it provides training and technical assistance to the Colombian government at the local and national levels to expand financial services into rural areas, build capacity of municipal governments, and encourage the Colombian government’s investment in programs. For example, MIDAS worked with the Colombian government to lower microfinance fees and provided technical assistance to private lenders, which led to increased availability of small loans in rural areas that can be used to start up small- and medium-sized businesses. Overall, alternative development beneficiaries we talked with told us their quality of life has improved because they faced less intimidation by FARC and had better access to schools and social services, even though they generally earned less money compared with cultivating and trafficking in illicit drugs. One challenge facing alternative development programs is their limited geographic scope. Alternative development programs are largely focused in economic corridors in the western part of Colombia, where, according to USAID officials, a greater potential exists for success due to access to markets, existing infrastructure, and state presence and security. Currently, USAID has almost no alternative development projects in eastern Colombia, where the majority of coca is grown. (See fig. 14.) While the majority of the Colombian population lives within the USAID economic corridors, the lack of programs in eastern Colombia nonetheless poses a challenge for linking alternative development to reducing the production of illicit drugs. The USAID Mission Director told us that the mission intends to expand the geographic scope of alternative development programs as the government of Colombia gains control over conflictive areas. 
However, the lack of transportation infrastructure in most coca growing areas limits the chances of program success and future expansion. USAID and other U.S. Embassy Bogotá officials emphasized that alternative development programs have benefited from security gains made possible through the Colombian military’s enhanced air mobility, but large areas of Colombia are still not secure. According to USAID officials, another challenge is the government of Colombia’s “Zero Illicit” policy, which prohibits alternative development assistance projects in communities where any illicit crops are being cultivated. Acción Social officials said the policy is intended to foster a culture of lawfulness and encourage communities to exert peer pressure on families growing illicit crops so that the community at large may become eligible for assistance. However, USAID officials expressed concern that the policy limits their ability to operate in areas where coca is grown. The policy also complicates USAID’s current strategy of working in conflictive areas like Meta, where coca is cultivated in high concentrations. One nongovernmental organization official told us the policy is a major problem because if one farmer grows coca in a community otherwise fully engaged in and committed to growing licit crops, then all aid is supposed to be suspended to that community. However, USAID officials told us programs have only been suspended a few times due to this requirement. USAID collects data on 15 indicators that measure progress on alternative development; however, none of these indicators measures progress toward USAID’s goal of reducing illicit narcotics production through the creation of sustainable economic projects. Rather, USAID collects data on program indicators such as the number of families benefited and hectares of legal crops planted. 
While this information helps USAID track the progress of projects, it does not help with assessing USAID’s progress in reducing illicit crop production or its ability to create sustainable projects. In 2004, USAID officials said a new strategy was being developed that would allow for the creation of new performance measures. But USAID did not develop indicators that are useful in determining whether alternative development reduces drug production. For example, while USAID intends for coca farmers in eastern Colombia to move to areas with alternative development projects, USAID does not track the number of beneficiaries who moved out of areas prone to coca cultivation. In addition, while the current alternative development strategy is designed to produce sustainable results, USAID does not collect tracking data on beneficiaries who have received assistance to determine whether they remain in licit productive activities or which projects have resulted in sustainable businesses without government subsidies. The contractor responsible for implementing USAID’s alternative development programs told us USAID does not monitor the necessary indicators and, therefore, cannot determine the extent to which projects are contributing to reducing coca cultivation or increasing stability. Since fiscal year 2000, State’s Bureau of Population, Refugees, and Migration (PRM) reports it has provided $88 million in short-term, humanitarian assistance to support IDPs and other vulnerable groups (such as Afro-Colombians and indigenous peoples). PRM provides humanitarian assistance for up to 3 months after a person is displaced, providing emergency supplies as well as technical assistance and guidance to the government of Colombia and local humanitarian groups to build their capacity to serve IDPs. In addition, from fiscal years 2000 to 2007, USAID has provided over $200 million for longer-term economic and social assistance to support IDPs and vulnerable groups.
USAID assistance has focused on housing needs and generating employment through job training and business development and has also included institutional strengthening of Colombian government entities and nongovernmental organizations through technical assistance and training in areas such as delivery of housing improvements and subsidies and the provision of health care. According to USAID, more than 3 million people have benefited from this assistance. However, according to State and USAID officials, the number of newly displaced persons in Colombia continues to rise, and it can be difficult to register as an IDP. According to the United Nations High Commissioner for Refugees, Colombia has up to 3 million IDPs—the most of any country in the world. Acción Social reports it has registered over 2.5 million IDPs. But State PRM officials report that international and nongovernmental organizations estimate that between 25 and 40 percent of IDPs are not registered. Acción Social officials disagreed and estimated under-registration to be 10 percent. In any case, Acción Social officials said that the agency’s budget is not sufficient to provide assistance to all the IDPs registered. In 2003, the Colombian government and AUC entered into a peace accord to demobilize. State data indicate the United States has provided over $44 million for USAID programs for monitoring and processing demobilized AUC combatants, the verification mission of the Organization of American States, reparations and reconciliation for victims of paramilitary violence, and the reintegration of adult and child ex-combatants into Colombian society. USAID also supports the National Commission on Reparation and Reconciliation, which was created to deliver reparations and assistance to victims. From 2003 to 2006, according to USAID, approximately 32,000 AUC members demobilized.
Most were offered pardons for the crime of raising arms against the Colombian state and were enrolled in a government of Colombia reintegration program. AUC leaders and soldiers who had been charged, arrested, or convicted of any major crime against humanity (such as murder and kidnapping) were offered alternative sentencing in exchange for providing details of crimes in depositions to Colombian officials. USAID assisted the government of Colombia in the creation of 37 service centers, mostly in large cities, at which ex-combatants could register for health services, job training, and education and career opportunities, and has assisted the service centers in tracking the demobilized soldiers’ participation in the reintegration process. USAID also assisted with AUC identity verification, criminal record checks, initial legal processing, documentation of biometric data (such as pictures, thumbprints, and DNA samples), and issuance of a registration card. U.S. and Colombian officials report that the AUC demobilization has enhanced security through reductions in murders, displacements, and human rights abuses. Depositions have uncovered thousands of crimes, hundreds of former combatants are serving jail sentences for their crimes, and victims of paramilitary violence are beginning to see resolution to crimes committed against them and their families. In April 2008, the government of Colombia began allowing some FARC deserters to receive benefits similar to those received by AUC. FARC ex-combatants who cooperate with Colombian authorities may receive pardons; enter a reintegration program; and have access to training, medical benefits, and counseling. Despite the progress made, Colombian and USAID officials told us the reintegration of demobilized combatants has been slow, and many may have returned to a life of crime. The reintegration program is the primary system to prevent the demobilized from joining the ranks of criminal gangs.
However, USAID officials estimate that approximately 6,000 of the demobilized have not accessed the service centers. Moreover, Colombian officials told us many businesses have been reluctant to hire the ex-combatants, and the majority has not found employment in the formal economy. Criminal gangs recruit heavily from the ranks of the demobilized, and Colombian officials estimate about 10 percent (or 3,000) have joined these illegal groups. In addition, a senior Colombian official reported that reparations to the victims of paramilitary violence have been slow. Ex-combatants have not been forthcoming about illegally obtained assets—which can be used to pay for reparations—and often hide them under the names of family or acquaintances. Victims of paramilitary violence have criticized the reparations process as slow and expressed resentment of the benefits paid to demobilized paramilitaries under the reintegration program. Initially, victims could not receive reparations unless there was a conviction, which required a lengthy judicial process. But, in April 2008, Colombia began to provide compensation to over 120,000 paramilitary victims without the requirement for a conviction. Since fiscal year 2000, State data indicate that USAID has provided over $150 million to support the rule of law in Colombia through human rights protection, the creation of conflict resolution centers, and training of public defenders, among other activities. USAID has provided more than 4,500 human rights workers protection assistance such as communications equipment and bullet proof vests, as well as technical assistance, training, equipment, and funding to programs that protect union leaders, journalists, mayors, and leaders of civil society organizations. USAID also created and provides assistance to Colombia’s Early Warning System, to alert authorities of violent acts committed by illegally armed groups.
According to USAID, since its inception in 2001, the Early Warning System has prevented over 200 situations that may have caused massacres or forced displacements. By the end of 2007, USAID achieved its goal of creating 45 justice sector institutions known as Justice Houses, and has trained over 2,000 conciliators who help to resolve cases at Justice Houses; these conciliators have handled over 7 million cases, relieving pressure on the Colombian court system. USAID has also refurbished or constructed 45 court rooms to ensure they are adequate for oral hearings under the new criminal justice system, and is developing 16 “virtual” court rooms, by which the defendant, judges, prosecutors, and public defenders can all participate via closed-circuit television. USAID has trained 1,600 public defenders since 2003, including training in a new criminal procedure code, and the Colombian government now pays all of the defenders’ salaries. However, these programs face challenges in receiving commitments from the Colombian government and addressing shortfalls in equal access to justice for all Colombians. USAID officials expressed concern about the Colombian government’s ability to fund the Early Warning System—USAID currently pays 95 to 98 percent of the salaries. According to USAID officials, a letter of understanding between USAID and the Colombian government calls for Colombia to pay 100 percent in 2011. In addition, the 45 Justice Houses in Colombia are located in large cities primarily in the western half of the country, with almost no Justice Houses in the less populated eastern half of the country where high rates of violence and crime occur. However, USAID plans to assist the Colombian government in strengthening state presence in rural areas of Colombia through the construction of 10 new regional Justice Houses in rural, post-conflict areas.
Since the beginning of 2007, USAID and Defense have committed $28.5 million for two programs that support Colombia’s “Clear, Hold and Consolidate” strategy: (1) the Regional Governance Consolidation Program and (2) the Initial Governance Response Program. Both programs directly support the Coordination Center for Integrated Government Action (CCAI), which was created in 2004 to integrate several military, police, and civil agencies and to coordinate national-level efforts to reestablish governance in areas that previously had little or no government presence. USAID works to increase the operational capacity of CCAI by providing direct planning and strategic assistance; for example, USAID hired a consulting firm to develop a detailed operational plan for CCAI’s activities in Meta. USAID also assists CCAI with projects designed to reinforce stability in areas formerly controlled by insurgents and quickly build trust between the government and local communities in Meta—such as La Macarena. USAID officials said Colombia’s consolidation strategy may serve as a model for future program activities throughout Colombia; however, CCAI faces challenges that could limit its success. CCAI does not have its own budget and relies on support, funding, and personnel from other agencies within the Colombian government. While Defense officials estimate that CCAI spent over $100 million from Colombian government agencies in 2007, it often faced delays in receiving the funding. Also, security remains a primary concern for CCAI because it operates in areas where illegal armed groups are present. For example, CCAI representatives in La Macarena do not travel outside of a 5-kilometer radius of the city center due to security concerns. 
Justice has provided over $114 million in fiscal years 2000 through 2007 for programs intended to improve the rule of law in Colombia, primarily for the transition to a new criminal justice system and training and related assistance for investigating human rights crimes and crimes confessed to by former combatants during the AUC demobilization. About $42 million was for training, technical assistance, and equipment to support the implementation of a new accusatory criminal justice system. In 2004, Colombia enacted a new Criminal Procedure Code, which began the implementation of an oral accusatory system involving the presentation and confrontation of evidence at oral public trials, similar to the system used in the United States. Justice training has included simulated crime scenes and court proceedings to develop the necessary legal and practical understanding of the oral accusatory system. Justice reports it has trained over 40,000 judges, prosecutors, police investigators, and forensic experts in preparation for their new roles. According to Justice, the new accusatory system has improved the resolution of criminal cases in Colombia. Under the old system, trials took an average of 5 years; this has been reduced to 1 year under the current system. According to Justice, the new system has led to an increase in the conviction rate of 60 to 80 percent, with Colombia reporting 48,000 convictions in the first 2 years of implementation. Furthermore, the number of complainants and witnesses increased since implementation, which suggests a greater public confidence in the new system. Justice also provided about $10 million for fiscal years 2005 to 2007 to both the Fiscalia’s Justice and Peace Unit and Human Rights Unit to support the AUC demobilization under the Justice and Peace Process. 
The Justice and Peace Unit oversees the process through which demobilized paramilitaries give depositions that detail their knowledge of the paramilitary structure and of crimes such as mass killings or human rights abuses. Justice has provided more than $2 million in equipment, including video recording technology, to aid in the processing of approximately 5,000 depositions at the Justice and Peace offices in Bogotá, Medellin, and Barranquilla. The unit also collects and processes complaints filed by victims of paramilitary violence. The Human Rights Unit is tasked with the investigation and prosecution of human rights violations, such as attacks on union leaders, forced disappearances, and mass graves, as well as the investigation and prosecution of demobilized paramilitary members suspected of human rights violations. According to Colombian officials, depositions have led to the confession of over 1,400 crimes that the government had no prior knowledge of, as well as the locations of an estimated 10,000 murder victims in 3,500 grave sites. Over 1,200 victims’ remains have been recovered through exhumations, and the human identification labs continue to work on the identification of the remains using DNA testing. According to Justice, the depositions of 25 paramilitary leaders have been initiated and, in May 2008, 15 leaders were extradited to the United States. The Justice and Peace Unit has received over 130,000 victims’ claims. Justice also provided about $10 million from fiscal years 2005 to 2007 to increase the capacity for the Colombian government to investigate criminal cases. 
Justice provided vehicles and funds for investigators to travel to crime scenes and collect evidence; specialized forensic training and equipment for Colombian exhumation teams that unearth victims’ remains based on information uncovered in depositions; and training, technical assistance, and DNA processing kits to Colombian human identification labs to streamline and improve DNA identification efficiency. Justice is also funding a project to collect DNA samples from 10,000 demobilized AUC members and enter the data into a DNA identification database, which could later be compared with DNA found at crime scenes. Additionally, funds were allocated to contract 30 attorneys to assist with the analysis and processing of thousands of complaints from paramilitary victims. Finally, Justice provided specialized criminal training in the areas of money laundering and anticorruption. Despite U.S. assistance toward improving Colombian investigative and prosecutorial capabilities, Colombian officials expressed concern that they lack the capacity to pursue criminal cases due to a lack of personnel, air mobility, and security, particularly given that most of the paramilitary killings and other AUC crimes occurred in rural areas too dangerous or too difficult to reach by road. In particular, Fiscalia and Justice officials said neither the Justice and Peace Unit nor the Human Rights Unit has enough investigators and prosecutors to fully execute their missions. For example, 45 prosecutors from the Human Rights Unit have to cover more than 4,000 assigned cases. From 2002 to 2007, the unit produced fewer than 400 convictions. Further, thousands of depositions and victim complaints, which Colombian officials say are likely to reveal additional crimes, have yet to be processed by the Fiscalia. As of October 2007, over 3,000 known grave sites had not been exhumed and less than half of the recovered human remains had been identified.
Justice has provided assistance to expand the unit, including regional units in 7 cities outside of Bogotá. Moreover, Justice reported in September 2008 that the Human Rights Unit has received an additional 72 prosecutors and 110 investigators, but noted that more investigators are needed. According to Colombian and U.S. officials, criminal and human rights investigations and exhumation of graves often require hours and sometimes days to complete. The investigators often have to go to conflictive areas that are impossible to access without sufficient transportation resources. For example, in remote areas investigators often need army or police helicopters. The Colombian National Police have programmed over 15,600 flying hours for their helicopters for 2008; however, police officials stated that none of these hours were allocated for Fiscalia investigations. U.S. officials confirmed Fiscalia’s need for additional transportation resources, including funding for commercial transportation as well as assets provided by Colombian security forces. From the outset of Plan Colombia, Congress made clear that it expected all U.S. support programs would eventually transition to Colombia. With the completion of Plan Colombia and the start-up of its second phase, Congress reiterated this guidance and called on State and other affected agencies to increase the pace of nationalization with a focus on the major aviation programs under Plan Colombia that are largely funded by State. In response to this guidance and budget cuts to fiscal year 2008 military assistance to Colombia instituted by Congress, State and Defense have accelerated efforts to nationalize or partly nationalize the five major Colombian military and National Police aviation programs supported by the United States.
Apart from these efforts, State has taken action to nationalize portions of its nonaviation program support, and State and Defense are seeking to transfer a portion of the assistance Defense manages in other program areas to the Colombians by 2010. Justice and USAID view their efforts as extending over a longer period than U.S. support to the Colombian military and have not yet developed specific nationalization plans; however, each agency is seeking to provide its Colombian counterparts with the technical capabilities needed to manage program operations on their own. U.S. nationalization efforts collectively face the challenges of uncertain funding levels and questions pertaining to Colombia’s near-term ability to assume additional funding responsibilities. State has initiated the transfer of program funding and operations for the Army Aviation Brigade to the Colombians—by far the largest aviation program funded by State. Nationalization efforts have centered on a contractor reduction plan created by State in 2004 to eliminate the Colombians’ reliance on U.S. contract pilots and mechanics (see fig. 18). This process, however, will not be completed until at least 2012 when State expects the Colombians will have enough trained pilots and mechanics to operate the brigade on their own. Contract pilot and mechanic totals provided by State indicate that the plan is on track. U.S. officials added that the transfer of U.S. titled aircraft and the termination of U.S. support for other costs, such as parts and supplies, will occur by 2012 as part of this plan. In contrast to the Army Aviation Brigade, State has not developed contractor reduction plans for the National Police’s Air Service or Aerial Eradication Program—the second and third largest aviation programs supported by State, which work together to address U.S. and Colombian counternarcotics objectives. U.S. 
Embassy and State program officials explained that State’s assistance to the police is expected to continue for the indefinite future, subject to congressional funding decisions, to sustain a partnership with the police that predates Plan Colombia. However, State has taken certain steps, such as training Colombian mechanics to replace contract personnel, to reduce the Colombians’ dependence on U.S. assistance. As of June 2008, only 3 of the Colombian National Police Air Service’s 233 pilots were contract personnel, while 61 out of 422 mechanics were contractors. For the Colombian National Police’s Aerial Eradication Program, as of June 2008, 61 out of 76 pilots were contract personnel, while 166 out of 172 mechanics were contract staff. NAS plans to institute a series of efforts, including the training of spray plane mechanics, to increase the ability of the Colombians to assume a greater share of total program costs. U.S. nationalization efforts were accelerated in the wake of the fiscal year 2008 budget cuts instituted by Congress but remain focused on State-funded aviation programs. Based on discussions with the Colombians beginning in 2007, the United States identified six key elements of NAS aviation programs as a starting point for accelerated nationalization efforts, which supplement the steps described above. As shown in table 4, these six areas cut across U.S.-supported aviation programs in Colombia. U.S. Embassy Bogotá officials estimated that these actions could result in nearly $70 million in annual program savings. NAS is currently seeking to identify additional budget savings by reducing its aerial spray program and through a wide assortment of “efficiencies” it expects to implement. State officials noted that these reductions and efficiencies will lead to diminished eradication and interdiction results.
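The June 2008 staffing counts above lend themselves to a quick tally of how far each police aviation program has moved away from U.S.-funded contract personnel. The short Python sketch below is illustrative only: the labels and groupings are ours, taken from the counts quoted above rather than from any State reporting format.

```python
# Contractor share of positions in the two Colombian National Police
# aviation programs, using the June 2008 counts cited in the report.
# (Program labels are illustrative, not an official State breakdown.)
programs = {
    "Air Service pilots": (3, 233),
    "Air Service mechanics": (61, 422),
    "Aerial Eradication pilots": (61, 76),
    "Aerial Eradication mechanics": (166, 172),
}

def contractor_share(contract, total):
    """Percent of positions filled by U.S.-funded contract personnel."""
    return round(100 * contract / total, 1)

shares = {name: contractor_share(c, t) for name, (c, t) in programs.items()}
for name, pct in shares.items():
    print(f"{name}: {pct}% contract personnel")
```

The arithmetic makes the contrast in the text concrete: with roughly 1 percent contract pilots, the Air Service is largely nationalized on the flight side, while the Aerial Eradication Program still relies on contractors for most pilot and nearly all mechanic positions.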
State has made significant progress in nationalizing nonaviation programs, including support for interdiction efforts (seaport, airport security, base security and roads, and Junglas operations); programs designed to extend the Colombian government’s presence throughout the country (mainly, reestablishing a National Police presence in all municipalities); and an individual deserter program, which supplements the formal demobilization and reintegration programs managed by USAID. NAS largely describes all of these programs, with regard to U.S.-funded employee or contractor involvement, as being fully nationalized or nearly nationalized with only limited U.S. oversight or technical assistance being provided. Defense nationalization efforts are managed by the U.S. military group in Bogotá. A senior military group official noted that Defense’s nationalization efforts are based on limited drawdowns in Defense-managed funds, which include both State FMF funds and Defense’s counternarcotics budget. The U.S. government is seeking to establish a strategic partnership with Colombia by 2010 whereby the Colombian Ministry of Defense will accelerate its efforts to assume increased funding and management responsibilities for programs currently supported with U.S. military assistance. The same official noted that the Military Group has closely coordinated this nationalization strategy with the Colombian military at all levels since November 2007. According to Defense officials, the 2008 cuts in FMF and Defense funding led to a reexamination of plans to transition some program funding and implementation responsibilities to the Colombians. In line with this reexamination, the U.S.
Military Group in Bogotá and State’s Bureau of Political-Military Affairs are developing a report to Congress that will detail their strategy to reduce FMF and Defense counternarcotics support over the next several years with an initial focus on 2010, when it is hoped the Colombians will reach a point of “irreversibility” with regard to security advances against the FARC and other illegal armed groups. USAID and Justice are focusing on the sustainability of projects and providing the Colombians with the technical capabilities to manage their own programs; however, neither agency has developed comprehensive transition plans. USAID and Justice efforts to transfer program and funding responsibilities differ significantly from State and Defense since, with limited exceptions, they do not have physical assets to turn over to the Colombians. Rather, their efforts center on training and capacity building to allow the Colombians to ultimately manage their own programs. USAID efforts focus on developing sustainable nonmilitary assistance programs, increasing the capacity of the government of Colombia to design and manage similar projects, and transferring any support activities, as warranted. USAID is seeking to create sustainable projects, in part, by increasing financial participation by the Colombian government, private sector, and project beneficiaries. For example, USAID alternative development projects are funded 70 percent to 90 percent by outside groups, on average, and have over $500 million in public and private funds. USAID is also seeking to increase the Colombians’ ability to design and manage their own assistance programs by involving relevant government of Colombia staff in project design and implementation activities. For example, USAID provides technical assistance to the government of Colombia on financial policy reforms that seek to expand financial services to underserved groups and isolated regions.
USAID also provides training to Colombian banks, credit unions, and nongovernmental organizations to establish and expand financial services for these groups. USAID has made efforts to transfer specific program operations and funding responsibilities for several projects. For example, USAID is transferring the Human Rights Early Warning System, which was originally funded entirely by USAID. Under an agreement, the government of Colombia currently funds 30 percent of this program and is supposed to assume full operational and financial responsibilities of this program in 2011. In addition, USAID will now contribute no more than 50 percent toward the construction of Justice Houses, which were initially constructed entirely with USAID funds. Justice efforts focus on building the capacity of the Colombian government in several areas, such as increasing the ability of the government to investigate and prosecute crimes, as well as provide protection to witnesses and legal personnel. Justice officials describe the process as one of creating an enduring partnership with the Colombian government through the provision of training and technical assistance. Justice conducts many “train the trainers” programs designed to enhance the ability of the Colombian government to continuously build institutional knowledge in certain program areas. Both U.S. and Colombian officials said the congressionally mandated cuts to military assistance in 2008 and uncertainty over future years’ funding complicate the process of planning and implementing nationalization efforts. In addition, while Colombia’s economic outlook has improved in recent years, its ability to appropriate funds quickly or reallocate funds already approved is limited. State noted in its April 2008 multiyear strategy report to Congress that the fiscal year 2008 budget significantly changed the mix of U.S. 
assistance to Colombia by reducing eradication, interdiction, and FMF programs and increasing support for economic development, rule of law, human rights, and humanitarian assistance. The report notes agreement with Congress on the importance of increasing support for nonmilitary programs, but State expressed concern regarding Colombia’s ability to use this assistance without the security that air mobility assets provide. The report also notes State’s concern about the need to “ensure a smooth and coordinated transition of financial and operational responsibilities to the government of Colombia for interdiction, eradication, and counterterrorism programs.” The Colombian Vice Minister of Defense stressed that the budget cuts mandated by Congress could not be fully absorbed within Colombia’s current budget cycle and added that the Ministry of Defense is severely restricted in its ability to reprogram funds or request emergency spending from the central government. He also said that unplanned cuts of this magnitude put major programs at risk, in particular programs aimed at providing the Colombians with air mobility capabilities needed to support drug reduction, enhanced state presence, and a range of social and economic programs. Both U.S. and Colombian officials are working on a detailed nationalization agreement that would outline next steps, transition plans, key players and responsibilities, and potential funding sources. In line with this objective, the Colombians have formed an Office of Special Projects to head up all nationalization efforts involving the Ministry of Defense. The office Director told us that, while all prior attempts at nationalization planning have not been implemented, the government of Colombia has begun a serious effort to plan for nationalization. According to the Director, this effort includes (1) developing an inventory of all U.S. 
assistance provided to Colombia in order to identify potential candidates for nationalization, (2) prioritizing the list and working with the Ministry of Financing and the National Planning Department to ensure that adequate funds will be made available to finance these priority items, and (3) discussing the prioritized list with U.S. representatives. Despite an improving economy and growth in public-sector resources, the Colombians have issued a call for international assistance to help fund a portion of PCCP from 2007 through 2013 noting that even a “single year without international support would force a retreat on the important advances that have been made so far.” The call for assistance is similar to that issued by the Colombians at the outset of Plan Colombia, when internal security concerns and poor economic conditions limited the Colombian government’s ability to fund its counternarcotics and counterterrorism objectives. The PCCP plan calls for spending by Colombia to total almost $44 billion from 2007 through 2013, with $6 billion of this total devoted to counternarcotics and counterterrorism operations and the balance devoted to social, economic, and rule of law efforts. When Plan Colombia was first announced in 1999, a combination of domestic and foreign events limited Colombia’s economic growth and its ability to fully fund the costs of its plan. As noted in a November 2007 assessment by the Center for Strategic and International Studies (CSIS), Colombia’s financial system experienced a period of stress, during the late 1990s, characterized by the failure of several banks and other financial institutions, as well as by the severe deterioration of the system’s financial health. The situation was exacerbated by violent conflict and, in 1999, the country’s gross domestic product fell by 4.2 percent, the first contraction in output since the 1930s. 
In 2003, we reported that Colombia’s ability to provide additional funding to sustain the counternarcotics programs without a greatly improved economy was limited. Improvements in Colombia’s security environment and economy have allowed the government to significantly increase spending levels in a number of areas. Colombia’s $130 billion economy grew at 6.8 percent in 2006, the highest rate in 28 years and two points faster than the Latin American average. Colombia has reduced its inflation rate from 16.7 percent in 1998 to 4.5 percent in 2006. According to the CSIS report, Colombia has improved its economy through a combination of fiscal reforms, public debt management, reduction of inflation, and strengthening of the financial system—policies that, along with three successive International Monetary Fund arrangements, have placed the country on a path of sustainable growth while reducing poverty and unemployment. While Plan Colombia’s drug reduction goals were not fully met, U.S. assistance has helped stabilize Colombia’s internal security situation by weakening the power of illegal armed groups to hold disputed areas that largely correlate to the major coca growing regions in the country. State anticipates that billions of dollars in additional aid will need to be provided to Colombia through at least 2013 to help achieve a desired end state where drug, security, social and economic welfare, and civil society problems reach manageable levels. One principal challenge is determining which combination of military and nonmilitary programs will have the greatest effect on combating the drug trade in Colombia. Program activities in the past have relied heavily on the use of aerial spraying as a key tool for driving down coca cultivation levels, and the vast bulk of U.S. counternarcotics assistance has gone to eradication and interdiction efforts. However, coca cultivation reduction goals were not met.
As a result, Congress directed a decreased emphasis on aerial eradication, while directing that more be spent on alternative development and other nonmilitary program areas. However, USAID does not currently measure the effect alternative development has on this goal or the extent to which its programs are self-sustaining. Congress has renewed its call for accelerated nationalization efforts on the part of State and other U.S. agencies operating in Colombia. Both State and Defense are engaged in reducing assistance for military and police programs. USAID and Justice officials agree that sustainable nonmilitary programs will take years to develop; however, both agencies have begun to nationalize some portions of their assistance. While high-level planning for nationalization has taken place and several discrete planning efforts are in place or are under development, U.S. nationalization efforts are not guided by an integrated plan that fully addresses the complex mix of agency programs, differing agency goals, and varying timetables for nationalization. Such a plan should include key milestones and future funding requirements that take into account the government of Colombia’s ability to assume program costs supported by the United States. We recommend that the Secretary of State, in conjunction with the Secretary of Defense, Attorney General, and Administrator of USAID, and in coordination with the government of Colombia, develop an integrated nationalization plan that details plans for turning over operational and funding responsibilities for U.S.-supported programs to Colombia. This plan should define U.S. roles and responsibilities for all U.S.-supported military and nonmilitary programs. Other key plan elements should include future funding requirements; a detailed assessment of Colombia’s fiscal situation, spending priorities, and ability to assume additional funding responsibilities; and specific milestones for completing the transition to the Colombians.
We also recommend that the Director of Foreign Assistance and Administrator of USAID develop performance measurements that will help USAID (1) assess whether alternative development assistance is reducing the production of illicit narcotics, and (2) determine to what extent the agency’s alternative development projects are self-sustaining. We provided a draft of this report to the departments of Defense, Homeland Security, Justice, and State; ONDCP; and USAID for their comments. Defense, State, ONDCP, and USAID provided written comments, which are reproduced in appendixes IV through VII. All except Homeland Security provided technical comments and updates, which we incorporated in the report, as appropriate. In commenting on our recommendation to the Secretary of State, State agreed that it should continue to improve the coordination of nationalization efforts among Defense, other executive branch agencies, and the government of Colombia. State noted that its annual multiyear strategy report (which it first provided to Congress in 2006) offers the most useful format to address our recommendation. While State’s annual report is useful, it does not incorporate and rationalize the complex mix of agency programs, funding plans and schedules, differing agency goals, and varying timetables for nationalization as we recommend. State did not address how it intends to address these more detailed elements with Defense, Justice, and USAID. We continue to believe that an integrated plan addressing these elements would benefit the interagency and the Congress alike, as future assistance for Colombia is considered. In commenting on our recommendation to the Administrator of USAID, USAID stated that the measures it has are sufficient to gauge progress towards its strategic goals. However, USAID went on to say that better measures/indicators to assess alternative development projects could be developed. 
The USAID mission in Colombia noted that it is working with the USAID missions in Bolivia and Peru, which also manage alternative development programs, to identify new indicators to help measure progress. The USAID/Colombia mission also stated that USAID/Washington should lead an effort, in conjunction with the field and other interested agencies, to develop common indicators that would enhance USAID’s ability to measure alternative development performance. We concur. In making our recommendation, we concluded that USAID’s measures were largely output indicators that did not directly address reducing illicit drug activities or the long-term sustainability of USAID’s efforts. An overall review such as that suggested by USAID/Colombia may help address this shortcoming. ONDCP and State commented that our draft report left the impression that little or no progress had been made with regard to Plan Colombia’s counternarcotics goal. In response, we modified the report title and specific references in the report to better reflect that some progress was made; primarily, opium poppy cultivation and heroin production were reduced by about 50 percent. However, coca cultivation and cocaine production have been the focus of Colombian and U.S. drug reduction efforts since 2000. Neither was reduced; rather, both coca cultivation and cocaine production rose from 2000 to 2006. However, at ONDCP’s suggestion, we added current information that suggests cocaine productivity (cocaine yield per hectare of coca) in Colombia has declined in recent years. Finally, ONDCP also commented that the report did not adequately address the full range of program goals associated with Plan Colombia and the progress made towards achieving these goals. We disagree. In characterizing and summarizing Plan Colombia’s goals and U.S. programs, we reviewed reports prepared by State as well as our prior reports, and discussed the goals and associated programs with U.S. officials both in Washington, D.C., and the U.S.
Embassy in Bogotá, and with numerous government of Colombia officials. We addressed U.S. assistance provided for nine specific Colombian military and National Police programs to increase their operational capacity, as well as numerous State, Justice, and USAID efforts to promote social and economic justice, including alternative development, and to promote the rule of law, including judicial reform and capacity building. We also note that State, USAID, and Defense did not raise similar concerns. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Defense and State; the Attorney General; the Director of Foreign Assistance and USAID Administrator; and the Director of ONDCP. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. We examined U.S. assistance efforts since 2000 when funding for Plan Colombia was first approved. Specifically, we examined (1) the progress made toward Plan Colombia’s drug reduction and enhanced security objectives; (2) program support provided to the Colombian military and National Police, including specific results and related challenges; (3) nonmilitary program support provided to Colombia, including specific results and related challenges; and (4) the status of U.S. and Colombian efforts to nationalize U.S. assistance and the challenges, if any, these efforts face. 
To address the progress made toward Plan Colombia’s drug reduction and enhanced security objectives, we reviewed various U.S. and Colombian government reports and met with cognizant officials to discuss trends and the nature of the data. For trends in drug cultivation, production, and flow we relied primarily on U.S. agencies’ information and officials. For trends in security data on government territorial control, homicides, kidnappings, and ridership on Colombian roads, we relied on data reported by the Colombian Ministry of Defense and other Colombian government ministries. To evaluate trends in Colombian drug cultivation and trafficking since calendar year 2000, we reviewed various studies, such as the National Drug Threat Assessment produced each year by the National Drug Intelligence Center. We reviewed various strategy documents produced by the United States that are the basis for overall drug control efforts, such as the Office of National Drug Control Policy’s (ONDCP) annual National Drug Control Strategy, and the Department of State’s (State) annual International Narcotics Control Strategy Report (INCSR). To track changes in coca cultivation and cocaine production trends in Colombia we relied on the Interagency Assessment of Cocaine Movement (IACM), an annual interagency study designed to advise policymakers and resource analysts whose responsibilities include detection, monitoring, and interdicting illegal drug shipments. To track changes in the combined amount of cocaine flowing towards the United States from Bolivia, Colombia, and Peru, we relied on IACM. Because no similar interagency flow assessments are done for heroin, we obtained estimates of production and seizures from State’s INCSR and the National Drug Threat Assessments. 
To understand how these estimates were developed, we discussed the studies and overall trends in the illicit drug threat from Colombia with officials from the Defense Intelligence Agency in Arlington, Virginia; the Drug Enforcement Administration in Arlington, Virginia; the Central Intelligence Agency’s Crime and Narcotics Center (CNC), Langley, Virginia; the Joint Interagency Task Force-South, Key West, Florida; and the Narcotics Affairs Section and the U.S. Military Group, U.S. Embassy, Bogotá, Colombia. We also met with and discussed these overall trends with Colombian officials in the Ministries of Defense, including the Deputy Minister of Defense. In addition, we compared the patterns and trends for the cultivation, production, and movement of cocaine and the cultivation and production of opium and noted that they were broadly consistent. We determined cultivation, production, and illicit narcotics flow data have some limitations, due in part to the illegal nature of the drug trade and the time lag inherent in collecting meaningful data. With regard to estimates of coca cultivation and cocaine production levels in Colombia, we noted that CNC expanded the number of hectares surveyed for coca cultivation beginning in 2005 in response to concerns that coca farmers were moving their operations to avoid aerial spray operations. Between 2004 and 2006, CNC’s survey area rose from 10.9 million hectares to 23.6 million hectares. This change complicates the process of comparing pre-2005 cultivation levels with later year estimates. In addition, because of methodological concerns, the IACM began reporting in 2004 its estimated flow of cocaine as a range rather than a point estimate. Notwithstanding these limitations, we determined that these data were sufficiently reliable to provide an overall indication of the relative magnitude of, and general trends in, Colombia’s illicit drug trade since 2000. 
To evaluate security trends, we used data provided primarily by the government of Colombia. To assess its reliability, we interviewed knowledgeable officials at the U.S. Embassy Bogotá and compared general patterns across data sets. We met with and discussed these overall trends with Colombian officials in the Ministries of Defense (including the Deputy Minister of Defense) and Justice (including the Colombian Attorney General). Some civil society representatives expressed concern that Colombian officials may be pressured to present favorable statistics, and that some information may be exaggerated. Nonetheless, U.S. officials, both in Washington, D.C., and Bogotá expressed confidence that the data illustrate overall trends that are widely accepted as accurate. U.S. officials added that while specific checks on the validity of these data are not conducted, data provided by Colombia are consistent with independent U.S. Embassy Bogotá reporting on Colombia’s political, military, and economic environment. As a result, we determined that the data were sufficiently reliable to indicate general trends in government territorial control, homicides, kidnappings, and ridership between 2000 and 2006. To assess program support provided to the Colombian military and National Police since 2000, including results and related challenges, we reviewed and analyzed congressional budget presentations, program and project status reports, our prior reports, and related information. We also reviewed program and budgetary data from the various departments and agencies in Washington, D.C., that manage these programs and met with officials responsible for these programs, including officials from State and Defense, as well as the Office of National Drug Control Policy. We met with cognizant U.S. officials at the U.S. 
Southern Command headquarters, Miami, Florida; State’s Office of Aviation Programs headquarters, Patrick Air Force Base, Florida; and the Joint Interagency Task Force-South, Key West, Florida. At the U.S. Embassy in Bogotá, Colombia, we met with U.S. officials with the Narcotics Affairs Section, the U.S. Military Group, and the Drug Enforcement Administration, as well as U.S.-funded contractor representatives assisting with the Colombian Army Aviation Brigade, the National Police Air Service, and the police aerial eradication program. In Bogotá, we also met with Colombian Ministry of Defense military and police commanders and other officials, including the Deputy Minister of Defense. We visited facilities and met with Colombian Army commanders at the Army’s Aviation Brigade headquarters in Tolemaida, the Counternarcotics Brigade headquarters in Larandia, and Task Force-Omega’s operating base in La Macarena; and Colombian Marine commanders at their operating base in Tumaco. We also visited facilities and met with Colombian National Police commanders and other officials at its main base in Guaymaral (near Bogotá) and a police operating base in Tumaco, where we observed an aerial eradication mission in southern Nariño. To evaluate the reliability of funding and performance data (beyond the drug cultivation, production, and flow, as well as security indicators discussed above) provided by U.S. and Colombian officials, we analyzed relevant U.S. and Colombian data sources and interviewed cognizant officials to determine the basis for reported information. We performed cross-checks of the data by comparing internal and external budget reports (such as State and Defense Congressional Budget Justifications), agency performance reports, and classified information sources. We determined that the cost and performance data provided were sufficiently reliable for the purposes of our report.
To assess nonmilitary program support provided since 2000, including results and related challenges, we reviewed our prior reports along with pertinent planning, implementation, strategic, and related documents and met with cognizant U.S. officials at State and Justice and the U.S. Agency for International Development (USAID) in Washington, D.C., and the U.S. Embassy in Bogotá, Colombia. To review the progress of alternative development programs, we met with USAID officials and contractors in Washington, D.C., and in Colombia. We reviewed pertinent planning documentation including USAID strategic plans for 2000-2005 and 2006-2007, as well as progress reports produced by USAID’s prime contractor. We observed alternative development programs in the departments of Bolivar, Huila, Popayán, and Santander. To review efforts on internally displaced persons and demobilization, we met with officials from USAID, Justice, and State’s Bureau of Population, Refugees, and Migration in Washington, D.C., and in Colombia. We interviewed government of Colombia officials from Acción Social, the National Commission on Reconciliation and Reparations, the Ministry of Interior and Justice, the Fiscalia, the Superior Council for the Judiciary, the Inspector General’s Office, the Public Defenders Directorate, the Ministry of Agriculture, and the Ministry of Labor and Social Protection. We also met with the High Commissioner for Reconciliation and Reintegration in Colombia, and with civil society and private-sector representatives both in Washington, D.C., and Colombia regarding human rights issues. We observed programs in the cities of Bogotá, Cartagena, and Medellin. To evaluate the reliability of funding and performance data provided by U.S. and Colombian officials, we analyzed relevant U.S. and Colombian data sources and interviewed cognizant officials to determine the basis for reported information. We performed cross-checks of provided data against internal agency budget documents and external U.S.
budget reports (such as State, USAID, and Justice Congressional Budget Justifications), agency performance reports, and Colombian reports and studies. We determined that the cost data provided by U.S. agencies were sufficiently reliable for our purposes. We did note certain limitations with regard to the performance data we received from U.S. agencies. Because of the difficult security situation in Colombia, U.S. agencies must often rely on third parties to document performance data. In particular, the USAID Office of Inspector General raised some concerns in May 2007 regarding the consistency with which alternative development performance goals had been defined, but was nevertheless able to use the data to determine whether overall goals had been met. Consequently, we determined that the data on families that have benefited from alternative development assistance, infrastructure projects completed, hectares of licit agricultural crops developed, and private-sector funds leveraged by USAID activities were sufficiently reliable to allow for broad comparisons of actual performance in 2007 against the goals that had been set, but that these data could not be used for very precise comparisons. To determine the status of U.S. and Colombian efforts to nationalize U.S. assistance, we reviewed planning and strategic documents related to nationalization, including a memorandum of understanding between the United States and Colombia regarding the transfer of programs. We met with State and Defense officials in Washington, D.C.; State’s Office of Aviation Programs at Patrick Air Force Base; and U.S. Southern Command in Florida. We met with a special consultant to State who was conducting a strategic review of State programs in Colombia. In Colombia, we met with designated U.S. Embassy Bogotá officials responsible for managing U.S.
nationalization efforts, along with an ambassador appointed by State to lead negotiations with Colombia regarding current and planned steps in the nationalization process. We discussed the implications of nationalization with Colombian government officials from the National Planning Department, the Ministry of Defense (in particular, the Office of Special Projects charged with leading the ministry’s nationalization efforts), the Colombian Army and National Police, the Ministry of Interior and Justice, and Acción Social. Finally, the information and observations on foreign law in this report do not reflect our independent legal analysis but are based on interviews with cognizant officials and secondary sources. State and Defense officials told us that the Army Aviation Brigade has been provided with essential support services needed to manage a modern combat aviation service, including infrastructure and maintenance support; contract pilots and mechanics; assistance to train pilots and mechanics; flight planning, safety, and quality control standards and procedures; and a logistics system. Table 5 describes these support services in more detail. Similar to the Army Aviation Brigade, State has provided key program support elements to the Colombian National Police’s Air Service. These elements include contract mechanics; mechanics training; the construction of helipads and hangars; and funding for spare parts, fuel, and other expenses. Table 6 describes these support services in more detail. As illustrated in figure 19, the estimated number of hectares of coca under cultivation in Bolivia, Colombia, and Peru has varied since 2000 from an estimated 187,500 hectares to 233,000 hectares in 2007, and averaged about 200,000 hectares since 2000. As noted in our report, these changes were due, at least in part, to the Crime and Narcotics Center’s decision to increase the size of the coca cultivation survey areas in Colombia from 2004 to 2006. The U.S.
interagency counternarcotics community uses the number of hectares of coca under cultivation to help estimate the amount of 100 percent pure cocaine that can be produced in each country. Essentially, the community calculates production efficiency rates for turning coca leaf into cocaine and applies it to the total number of hectares under cultivation. As illustrated in figure 20, the total amount of estimated pure cocaine produced in Bolivia, Colombia, and Peru has fluctuated since 2000 but has risen from 770 metric tons in 2000 to 865 metric tons in 2007, and averaged about 860 metric tons per year since 2000. In 2008, the interagency counternarcotics community reduced Colombia’s estimated cocaine production efficiency rate for the years 2003 through 2007. The community attributed the reduced efficiency to Colombia’s efforts to eradicate coca. However, according to Drug Enforcement Administration officials, the interagency had also raised the production efficiency rate in Peru for 2002 through 2005 due to better processing techniques, which offset much of the reduction in Colombia. The Drug Enforcement Administration also noted that it has not reassessed the cocaine production efficiency rate in Bolivia since 1993, but expects that Bolivia has improved its processing techniques and is producing more pure cocaine than the interagency has estimated. Following are GAO’s comments on the Department of Defense’s comment letter dated September 17, 2008. 1. The transfer of these assets was not highlighted as a significant example of nationalization during the course of our review when we met with Defense officials in Washington, D.C., or the U.S. Military Group in Bogotá. Nonetheless, we added a statement to report progress in this area. 2. We incorporated Defense’s observation that the Strategic Partner Transition Plan addresses both Foreign Military Financing and Defense counternarcotics funding. 
As noted in our report, however, State’s Political-Military Bureau declined to provide us a copy of the plan until it is formally released to Congress. As a result, we were not able to independently assess the plan’s content and scope. Following are GAO’s comments on the State Department’s comment letter dated September 17, 2008. 1. We included additional information on coca cultivation and cocaine production patterns in the final report. We also note that 2007 coca cultivation and cocaine production data did not become available until after this report was released for agency comments, and we have added them, as appropriate. Following are GAO’s comments on the Office of National Drug Control Policy’s comment letter dated September 17, 2008. 1. We disagree. In characterizing and summarizing Plan Colombia’s goals and U.S. programs, we reviewed reports prepared by State as well as our prior reports, and discussed the goals and associated programs with U.S. officials both in Washington, D.C., and the U.S. Embassy in Bogotá, and with numerous government of Colombia officials. We addressed U.S. assistance provided for nine specific Colombian military and National Police programs to increase their operational capacity, as well as numerous State, Justice, and USAID efforts to promote social and economic justice, including alternative development, and to promote the rule of law, including judicial reform and capacity building. We also note that State, USAID, and Defense did not raise similar concerns. 2. The drop in potential cocaine production that ONDCP cites compares 2001 (when coca cultivation and production peaked) to 2007. Our report compares 2000 (when U.S. funding for Plan Colombia was first approved) to 2006 (Plan Colombia’s drug reduction goal was tied to a 6-year time period). We also note that 2007 coca cultivation and cocaine production data did not become available until after this report was released for agency comments, and we have added them, as appropriate.
Following are GAO’s comments on the U.S. Agency for International Development’s comment letter dated September 11, 2008. 1. We modified the report to note USAID has initiated nationalization efforts for each of its major program areas and several major projects. However, we note that USAID’s nationalization efforts are program- and project-specific and are not integrated with the range of other U.S. government efforts, as we recommend should be done. 2. We believe we fairly characterized USAID’s assistance role in the counternarcotics strategy for Colombia. However, we did not intend to imply that USAID alternative development programs are social programs. We intended to note that USAID’s assistance supports social infrastructure, such as schools and other community projects. We clarified the text where appropriate. 3. We only intended to note that most coca growing areas do not receive USAID assistance for various reasons, including restrictions by the government of Colombia. USAID resources are scarce and must be deployed to the areas most likely to achieve sustainable results. We added text to note that the majority of the Colombian population lives within the geographic areas where USAID operates. However, the fact that the majority of coca is cultivated outside of USAID’s economic corridors poses challenges for USAID’s strategic goal of reducing the production of illegal drugs. 4. We endorse and commend USAID/Colombia’s attempt to work at both the mission level and with USAID/Washington to develop common indicators that would enhance USAID’s ability to assess the performance of alternative development projects. 5. We recognize key indicators such as increased gross market value and number of families benefited are useful in determining the impact of USAID programs at a family or farm level.
However, these indicators do not measure the sustainability of the projects, such as whether families or businesses have continued in legal productive activities after USAID assistance has ended. 6. We agree that outside support for USAID alternative development projects is a key component of creating self-sustaining projects. However, this point does not address the fact that USAID does not currently collect and report data on whether USAID-supported activities continue after its involvement ends. In addition to the above-named individual, A.H. Huntington, III, Assistant Director; Joseph Carney, Jonathan Fremont, Emily Gupta, Jose Peña, and Michael ten Kate made key contributions to this report. Technical assistance was provided by Joyce Evans, Jena Sinkfield, and Cynthia Taylor. Drug Control: Cooperation with Many Major Drug Transit Countries Has Improved, but Better Performance Reporting and Sustainability Plans Are Needed. GAO-08-784. Washington, D.C.: July 15, 2008. Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, but the Flow of Illicit Drugs into the United States Remains High. GAO-08-215T. Washington, D.C.: October 25, 2007. Drug Control: U.S. Assistance Has Helped Mexican Counternarcotics Efforts, but Tons of Illicit Drugs Continue to Flow into the United States. GAO-07-1018. Washington, D.C.: August 17, 2007. State Department: State Has Initiated a More Systematic Approach for Managing Its Aviation Fleet. GAO-07-264. Washington, D.C.: February 2, 2007. Drug Control: Agencies Need to Plan for Likely Declines in Drug Interdiction Assets, and Develop Better Performance Measures for Transit Zone Operations. GAO-06-200. Washington, D.C.: November 15, 2005. Security Assistance: Efforts to Secure Colombia’s Caño Limón-Coveñas Oil Pipeline Have Reduced Attacks, but Challenges Remain. GAO-05-971. Washington, D.C.: September 6, 2005.
Drug Control: Air Bridge Denial Program in Colombia Has Implemented New Safeguards, but Its Effect on Drug Trafficking Is Not Clear. GAO-05-970. Washington, D.C.: September 6, 2005. Drug Control: U.S. Nonmilitary Assistance to Colombia Is Beginning to Show Intended Results, but Programs Are Not Readily Sustainable. GAO-04-726. Washington, D.C.: July 2, 2004. Drug Control: Aviation Program Safety Concerns in Colombia Are Being Addressed, but State’s Planning and Budgeting Process Can Be Improved. GAO-04-918. Washington, D.C.: July 29, 2004. Drug Control: Specific Performance Measures and Long-Term Costs for U.S. Programs in Colombia Have Not Been Developed. GAO-03-783. Washington, D.C.: June 16, 2003. Drug Control: Financial and Management Challenges Continue to Complicate Efforts to Reduce Illicit Drug Activities in Colombia. GAO-03-820T. Washington, D.C.: June 3, 2003. Drug Control: Coca Cultivation and Eradication Estimates in Colombia. GAO-03-319R. Washington, D.C.: January 8, 2003. Drug Control: Efforts to Develop Alternatives to Cultivating Illicit Crops in Colombia Have Made Little Progress and Face Serious Obstacles. GAO-02-291. Washington, D.C.: February 8, 2002. Drug Control: Difficulties in Measuring Costs and Results of Transit Zone Interdiction Efforts. GAO-02-13. Washington, D.C.: January 25, 2002. Drug Control: State Department Provides Required Aviation Program Support, but Safety and Security Should Be Enhanced. GAO-01-1021. Washington, D.C.: September 14, 2001. Drug Control: U.S. Assistance to Colombia Will Take Years to Produce Results. GAO-01-26. Washington, D.C.: October 17, 2000. Drug Control: Challenges in Implementing Plan Colombia. GAO-01-76T. Washington, D.C.: October 12, 2000. Drug Control: U.S. Efforts in Latin America and the Caribbean. GAO/NSIAD-00-90R. Washington, D.C.: February 18, 2000.
| In September 1999, the government of Colombia announced a strategy, known as "Plan Colombia," to (1) reduce the production of illicit drugs (primarily cocaine) by 50 percent in 6 years and (2) improve security in Colombia by reclaiming control of areas held by illegal armed groups. Since fiscal year 2000, the United States has provided over $6 billion to support Plan Colombia. The Departments of State, Defense, and Justice and the U.S. Agency for International Development (USAID) manage the assistance. GAO examined (1) the progress made toward Plan Colombia's drug reduction and enhanced security objectives, (2) the results of U.S. aid for the military and police, (3) the results of U.S. aid for non-military programs, and (4) the status of efforts to "nationalize" or transfer operations and funding responsibilities for U.S.-supported programs to Colombia. Plan Colombia's goal of reducing the cultivation, processing, and distribution of illegal narcotics by 50 percent in 6 years was not fully achieved. From 2000 to 2006, opium poppy cultivation and heroin production declined about 50 percent, while coca cultivation and cocaine production levels increased by about 15 and 4 percent, respectively. These increases, in part, can be explained by measures taken by coca farmers to counter U.S. and Colombian eradication efforts. Colombia has improved its security climate through systematic military and police engagements with illegal armed groups and by degrading these groups' finances. U.S. Embassy Bogotá officials cautioned that these security gains will not be irreversible until illegal armed groups can no longer threaten the stability of the government of Colombia, but become a law enforcement problem requiring only police attention. Since fiscal year 2000, State and Defense provided nearly $4.9 billion to the Colombian military and National Police.
Notably, over 130 U.S.-funded helicopters have provided the air mobility needed to rapidly move Colombian counternarcotics and counterinsurgency forces. U.S. advisors, training, equipment, and intelligence assistance have also helped professionalize Colombia's military and police forces, which have recorded a number of achievements including the aerial and manual eradication of hundreds of thousands of hectares of coca, the seizure of tons of cocaine, and the capture or killing of a number of illegal armed group leaders and thousands of combatants. However, these efforts face several challenges, including countermeasures taken by coca farmers to combat U.S. and Colombian eradication efforts. Since fiscal year 2000, State, Justice, and USAID have provided nearly $1.3 billion for a wide range of social, economic, and justice sector programs. These programs have had a range of accomplishments, including aiding internally displaced persons and reforming Colombia's justice sector. But some efforts have been slow in achieving their objectives while others are difficult to assess. For example, the largest share of U.S. non-military assistance has gone towards alternative development, which has provided hundreds of thousands of Colombians legal economic alternatives to the illicit drug trade. But, alternative development is not provided in most areas where coca is cultivated and USAID does not assess how such programs relate to its strategic goals of reducing the production of illicit drugs or achieving sustainable results. In response to congressional direction in 2005 and budget cuts in fiscal year 2008, State and the other U.S. departments and agencies have accelerated their nationalization efforts, with State focusing on Colombian military and National Police aviation programs. One aviation program has been nationalized and two are in transition, with the largest--the Army Aviation Brigade--slated for turnover by 2012. 
Two National Police aviation programs have no turnover dates established. State, Defense, Justice, and USAID each have their own approaches to nationalization, with different timelines and objectives that have not been coordinated to promote potential efficiencies. |
SSI provides financial assistance to people who are aged 65 or older, blind, or disabled, and who have limited income and resources. The program provides individuals with monthly cash payments to meet basic needs for food, clothing, and shelter. Last year, about 6.8 million recipients were paid about $33 billion in SSI benefits. During the application process, SSA relies on state Disability Determination Services to make the initial medical determination of eligibility while SSA field offices are responsible for determining whether applicants meet the program’s nonmedical (age and financial) eligibility requirements. To receive SSI benefits in 2002, individuals may not have income greater than $545 per month ($817 for a couple) or have resources worth more than $2,000 ($3,000 for a couple). When applying for SSI, individuals are required to report any information that may affect their eligibility for benefits. Similarly, once individuals receive SSI benefits, they are required to report events, such as changes in income, resources, marital status, or living arrangements to SSA field office staff in a timely manner. A recipient’s living arrangement can also affect monthly benefits. Generally, individuals who rent, own their home, or pay their share of household expenses if they live with other persons receive a higher monthly benefit than those who live in the household of another person and receive food and shelter assistance. To a significant extent, SSA depends on program applicants and recipients to accurately report important eligibility information. However, to verify this information SSA uses computer matches to compare SSI records against recipient information contained in records of third parties, such as other federal and state government agencies.
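The 2002 nonmedical limits above reduce to a simple threshold test. The sketch below restates them in Python; the function and constant names are illustrative (not SSA terminology), and it assumes the countable income and resource amounts have already been computed, ignoring the many exclusions an actual determination applies.

```python
# Illustrative sketch of the 2002 SSI financial limits described above.
# Assumes countable income/resources are already computed; real SSI
# determinations apply exclusions not modeled here.

INCOME_LIMIT = {"individual": 545, "couple": 817}      # monthly income, 2002
RESOURCE_LIMIT = {"individual": 2000, "couple": 3000}  # countable resources, 2002

def financially_eligible(monthly_income: float, resources: float,
                         unit: str = "individual") -> bool:
    """True if income and resources are at or below the 2002 limits."""
    return (monthly_income <= INCOME_LIMIT[unit]
            and resources <= RESOURCE_LIMIT[unit])
```

For example, an individual with $600 in monthly countable income fails the screen regardless of resources, while a couple at $800 and $2,500 passes both tests.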
To determine whether recipients remain financially eligible for SSI benefits after the initial assessment, SSA also periodically conducts redetermination reviews to verify eligibility factors such as income, resources, and living arrangements. Recipients are reviewed at least every 6 years, but reviews may be more frequent if SSA determines that changes in eligibility are likely. Since its inception, the SSI program has been difficult and costly to administer because even small changes in monthly income, available resources, or living arrangements can affect benefit amounts and eligibility. Complicated policies and procedures determine how to treat various types of income, resources, and in-kind support and maintenance that a recipient receives. SSA must constantly monitor these situations to ensure benefit amounts are paid accurately. On the basis of our work, which spans more than a decade, we designated SSI a high-risk program in 1997 and initiated work to document the underlying causes of longstanding SSI program problems and the impact these problems have had on program performance and integrity. In 1998, we reported on a variety of management problems related to the deterrence, detection, and recovery of SSI overpayments. Over the last several years, we also testified about SSA’s progress in addressing these issues (see app. I). Since 1998, SSA has demonstrated a stronger management commitment to SSI program integrity issues. SSA has also expanded the use of independent data to verify eligibility factors and enhanced its ability to detect payment errors. Today, SSA has far better capability to more accurately verify program eligibility and detect payment errors than it did several years ago. However, weaknesses remain in its debt prevention and deterrence processes. 
SSA has made limited progress toward simplifying complex program rules that contribute to payment errors and is not fully utilizing several overpayment prevention tools, such as penalties and the suspension of benefits for recipients who fail to report eligibility information as required. Since our 1998 report, SSA has taken a variety of actions that demonstrate a fundamental change in its management approach and a much stronger commitment to improved program integrity. First, SSA issued a report in 1998 that outlined its strategy for strengthening its SSI stewardship role. This report highlighted specific planned initiatives to improve program integrity and included timeframes for implementation. In addition to developing a written SSI program integrity strategy, SSA submitted proposals to Congress requesting new authorities and tools to implement its strategy. In December 1999, Congress provided SSA with several newly requested tools in the Foster Care Independence Act of 1999. The act gave SSA new authorities to deter fraudulent or abusive actions, better detect changes in recipient income and financial resources, and improve its ability to recover overpayments. Of particular note is a provision in the act that strengthened SSA’s authority to obtain applicant resource information from banks and other financial institutions. SSA’s data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. SSA also sought and received separate legislative authority to penalize persons who misrepresent material facts essential to determining benefit eligibility and payment amounts. SSA can now impose a period of benefit ineligibility ranging from 6 to 24 months for individuals who knowingly misrepresent facts. SSA also made improved program integrity one of its five agency strategic goals and established specific objectives and performance indicators to track its progress towards meeting this goal.
For example, the agency began requiring its field offices to complete 99 percent of their assigned redetermination reviews and other cases where computer matching identified a potential overpayment situation due to unreported wages, changes in living arrangements, or other factors. During our review, most field staff and managers that we interviewed told us that SSA’s efforts to establish more aggressive goals and monitor performance toward completing these reviews was a clear indication of the new enhanced priority it now places on ensuring timely investigation of potential SSI overpayments. To further increase staff attention to program integrity issues, SSA also revised its work measurement system—used for estimating resource needs, gauging productivity, and justifying staffing levels—to include staff time spent developing information for referrals to its Office of Inspector General (OIG). In prior work, we reported that SSA’s own studies showed that its employees felt pressured to spend most of their time on “countable” workloads, such as quickly processing and paying claims rather than on developing fraud referrals for which they received no credit. Consistent with this new emphasis, the OIG also increased the level of resources and staff devoted to investigating SSI fraud and abuse; key among the OIG’s efforts is the formation of Cooperative Disability Investigation (CDI) teams in 13 field locations. These teams consist of OIG investigators, SSA staff, state or local law enforcement officers, and state DDS staff who investigate suspicious medical claims through surveillance and other techniques. A key focus of the CDI initiative is detecting fraud and abuse earlier in the disability determination process to prevent overpayments from occurring. The OIG reported that the teams saved almost $53 million in fiscal year 2001 in improper benefit payments by providing information that led to a denial of a claim or the cessation of benefits. 
Finally, in a June 2002 corrective action plan, SSA reaffirmed its commitment to taking actions to facilitate the removal of the SSI program from our high-risk list. This document described SSA’s progress in addressing many of the program integrity vulnerabilities we identified and detailed management’s SSI program priorities through 2005. To ensure effective implementation of this plan, SSA has assigned senior managers responsibility for overseeing key initiatives, such as piloting new quality assurance systems. The report also highlighted several other program integrity initiatives under consideration by SSA, including plans to test whether touchtone telephone technology can improve the reporting of wages, credit bureau data can be used to detect underreported income, and public databases can help staff identify unreported resources, for example, automobiles and real property. To assist field staff in verifying the identity of recipients, SSA is also exploring the feasibility of requiring new SSI claimants to be photographed as a condition of receiving benefits. In prior work, we noted that SSA’s processes and procedures for verifying recipients’ income, resources, and living arrangements were often untimely and incomplete. In response to our recommendations, SSA has taken numerous actions to verify recipient reported information and better detect and prevent SSI payment errors. SSA has made several automation improvements to help field managers and staff better control overpayments. For example, last year, the agency distributed software nationwide that automatically scans multiple internal and external databases containing recipient financial and employment information and identifies potential changes in income and resources. The system then generates a consolidated report for use by staff when interviewing recipients. SSA also made systems enhancements to better identify newly entitled recipients with uncollected overpayments from a prior coverage period. 
Previously, each time an individual came on and off the rolls over a period of years, staff had to search prior SSA records and make system inputs to bring forward any outstanding overpayments to current records. The process of detecting overpayments from a prior eligibility period and updating recipient records now occurs automatically. SSA’s data show that, since this tool was implemented in 1999, the monthly amount of outstanding overpayments transferred to current records increased on average by nearly 200 percent, from $12.9 million a month to more than $36 million per month. Thus, a substantial amount of outstanding overpayments that SSA might not have detected under prior processes is now subject to collection action. Nearly all SSA staff and managers that we interviewed told us that systems enhancements have improved SSA’s ability to control overpayments. In commenting on this report, SSA said that it will soon implement another systems enhancement to improve its overpayment processes. SSA will automatically net any overpayments against underpayments that exist on a recipient’s record before taking any recovery or reimbursement actions. Presently, netting requires SSA employees to record a series of transactions and many opportunities to recover overpayments by netting them against existing underpayments are lost. SSA estimates that automating the netting process will reduce overpayments by up to $60 million each year, with a corresponding reduction in underpayments paid to beneficiaries. In addition to systems and software upgrades, SSA now uses more timely and comprehensive data to identify information that can affect SSI eligibility and benefit amounts. For example, in accordance with our prior recommendation, SSA obtained access to the Office of Child Support Enforcement’s National Directory of New Hires (NDNH), which is a comprehensive source of unemployment insurance, wage, and new hires data for the nation. 
In January 2001, SSA began providing field offices with direct access to NDNH and required its use to verify applicant eligibility during the initial claims process. With NDNH, SSA field staff now have access to more comprehensive and timely employment and wage information essential to verifying factors affecting SSI eligibility. More timely employment and wage information is particularly important, considering that SSA studies show that unreported compensation accounts for about 25 percent of annual SSI overpayments. SSA has estimated that use of NDNH will result in about $200 million in overpayment preventions and recoveries per year. Beyond obtaining more effective eligibility verification tools such as NDNH, SSA has also enhanced existing computer data matches to verify financial eligibility. For example, SSA increased the frequency (from annually to semiannually) in which it matches SSI recipient social security numbers (SSN) against its master earnings record, which contains information on the earnings of all social security-covered workers. In 2001, SSA flagged over 206,000 cases for investigation of unreported earnings, a threefold increase over 1997 levels. To better detect individuals receiving unemployment insurance benefits, quarterly matches against state unemployment insurance databases have replaced annual matches. Accordingly, the number of unemployment insurance detections has increased from 10,400 in 1997 to over 19,000 last year. SSA’s ability to detect nursing home admissions, which can affect SSI eligibility, has also improved. In 1997, we reported that SSA’s database for identifying SSI recipients residing in nursing homes was incomplete and its verification processes were untimely, resulting in substantial overpayments. At the time, this database included only 28 states and data matches were conducted annually. 
SSA now conducts monthly matches with all states, and the number of overpayment detections related to nursing home admissions has increased substantially from 2,700 in 1997 to 75,000 in 2001. SSA’s ability to detect recipients residing in prisons has also improved. Over the past several years, SSA has established agreements with prisons that house 99 percent of the inmate population, and last year SSA reported suspending benefits to about 54,000 prisoners. Recipients are ineligible for benefits in any given month if throughout that month they are in prison. SSA has also increased the frequency in which it matches recipient SSNs against tax records and other data essential to identify any unreported interest, income, dividends, and pension income individuals may be receiving. These matching efforts have also resulted in thousands of additional overpayment detections over the last few years. To obtain more current information on the income and resources of SSI recipients, SSA has also increased its use of online access to various state data. Field staff can directly query various state records to quickly identify workers’ compensation, unemployment insurance, or other state benefits individuals may be receiving. In 1998, SSA had online access to records in 43 agencies in 26 states. As of January 2002, SSA had expanded this access to 73 agencies in 42 states. As a tool for verifying SSI eligibility, direct online connections are potentially more effective than using periodic computer matches, because the information is more timely. Thus, SSA staff can quickly identify potential disqualifying income or resources at the time of application and before overpayments occur. In many instances, this allows the agency to avoid having to go through the often difficult and unsuccessful task of having to recover overpaid SSI benefits. 
During our field visits, staff and managers who had online access to state databases believed this tool was essential to more timely verification of recipient-reported information. SSA’s efforts to expand direct access to additional states’ data are ongoing. Finally, to further strengthen program integrity, SSA took steps to improve its SSI financial redetermination review process to verify that individuals remain eligible for benefits. First, SSA increased the number of annual reviews from 1.8 million in fiscal year 1997 to 2.4 million in 2001. Second, SSA substantially increased the number of redeterminations conducted through personal contact with recipients, from 237,000 in 1997 to almost 700,000 this year. SSA personally contacts those recipients that it believes are most likely to have payment errors. Third, because budget constraints limit the number of redeterminations SSA conducts, it refined its profiling methodology in 1998 to better target recipients that are most likely to have payment errors. Refinements in the selection methodology have allowed SSA to leverage its resources. SSA’s data show that, in 1998, refining the case selection methodology increased estimated overpayment benefits—amounts detected and future amounts prevented—by $99 million over the prior year. SSA officials have estimated that conducting substantially more redeterminations would yield hundreds of millions of dollars in additional overpayment benefits annually. However, officials from its Office of Quality Assurance and Performance Assessment indicated that limited resources would affect SSA’s ability to do more reviews and still meet other agency priorities. In June 2002, SSA informed us that the Commissioner recently decided to make an additional $21 million available to increase the number of redeterminations this year.
Despite its increased emphasis on overpayment detection and deterrence, SSA is not meeting its payment accuracy goals and it is too early to determine what impact its actions will ultimately have on its ability to make more accurate benefit payments. In 1998, SSA pledged to increase its SSI overpayment accuracy rate from 93.5 percent to 96 percent by fiscal year 2002. Since that time, however, SSA has revised this goal downward twice and for fiscal year 2001 it was 94.7 percent. Current agency plans do not anticipate achieving the 96-percent accuracy rate until 2005. Various factors may account for SSA’s inability to achieve its SSI accuracy goals, including lag times between the occurrence of an event affecting eligibility and SSA’s receipt of the information. In addition, key initiatives that might improve SSI overpayment accuracy have only recently begun or are in the early planning stages. For example, it was not until January 2001 that SSA began providing field offices with access to the NDNH database to verify applicants’ employment status and wages. SSA also only recently required staff to use NDNH when conducting post entitlement reviews of individuals’ continued eligibility for benefits. In fiscal year 2000, SSA estimated that overpayments attributable to wages—historically the number one source of SSI overpayments—were about $477 million or 22 percent of its payment errors. Thus, with full implementation, the impact of NDNH on overpayment accuracy rates may ultimately be reflected in future years. Furthermore, the Foster Care Independence Act of 1999 strengthened SSA’s authority to obtain applicant resource information from financial institutions. SSA’s data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. Last year, overpayments attributable to this category totaled about $394 million, or 18 percent of all detections. 
In May 2002, SSA issued proposed regulations on its new processes for accessing recipient financial data and plans to implement a pilot program later this year. When fully implemented, this tool may also help improve the SSI payment accuracy rate. SSA has made only limited progress toward addressing excessively complex rules for assessing recipients’ living arrangements, which have been a significant and longstanding source of payment errors. SSA staff must apply a complex set of policies to document an individual’s living arrangements and the value of in-kind support and maintenance (ISM) being received, which are essential to determining benefit amounts. Details such as usable cooking and food storage facilities with separate temperature controls, availability of bathing services, and whether a shelter is publicly operated can affect benefits. These policies depend heavily on recipients to accurately report whether they live alone or with others; the relationships involved; the extent to which rent, food, utilities, and other household expenses are shared; and exactly what portion of those expenses an individual pays. Over the life of the program, those policies have become increasingly complex as a result of new legislation, court decisions, and SSA’s own efforts to achieve benefit equity for all recipients. The complexity of SSI program rules pertaining to living arrangements, ISM, and other areas of benefit determination is reflected in the program’s administrative costs. In fiscal year 2001, SSI benefit payments represented about 6 percent of benefits paid under all SSA-administered programs, but the SSI program accounted for 31 percent of the agency’s administrative resources. Although SSA has examined various options for simplifying rules concerning living arrangements and ISM over the last several years, it has yet to take action to implement a cost-effective strategy for change. 
In December 2000, SSA issued a report examining six potential simplification options for living arrangements and ISM relative to program costs and three program objectives: benefit adequacy (ensuring a minimum level of income to meet basic needs); benefit equity (ensuring that recipients with like income, resources, and living arrangements are treated the same); and program integrity (ensuring that benefits are paid accurately, efficiently, and with no tolerance for fraud). SSA’s report noted that overpayments attributable to living arrangements and ISM in 1999 accounted for a projected $210 million, or 11 percent, of total overpayment dollars. The report also acknowledged that most overpayments were the result of beneficiaries not reporting changes in living arrangements and SSA staff’s failure to comply with complicated instructions for verifying information. SSA concluded that none of the options analyzed supported all of its SSI program goals. As a result, SSA recommended further assessing the tradeoffs among program goals presented by these simplification options. SSA’s study shows that at least two of the options would produce net program savings. For example, one option eliminated the need to determine whether an individual is living in another person’s household by counting ISM at the lesser of its actual value or one-third of the federal benefit rate. In addition to ultimately reducing program costs, SSA noted that this option would eliminate several inequities in current ISM rules and increase benefits for almost 1 percent of recipients. Although SSA cited some disadvantages (such as, additional development/calculations in some cases and decreasing benefits for about 2 percent of recipients), its analysis did not indicate that the disadvantages outweighed potential positive effects. 
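The first simplification option above, counting ISM at the lesser of its actual value or one-third of the federal benefit rate, is a one-line rule. A minimal sketch of that computation follows; the function name is illustrative, and the federal benefit rate is passed in rather than assumed.

```python
def countable_ism(actual_value: float, federal_benefit_rate: float) -> float:
    """Count in-kind support and maintenance (ISM) at the lesser of its
    actual value or one-third of the federal benefit rate, per the
    simplification option described above."""
    return min(actual_value, federal_benefit_rate / 3)
```

Under this rule, SSA would no longer need to establish whether an individual is living in another person's household; only the value of the support received and the benefit rate enter the calculation.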
Furthermore, for two other options in which SSA projected a large increase in program costs, it acknowledged that its estimates were based on limited data and were “very rough.” Thus, actual program costs associated with these options could be significantly lower or higher. Finally, to the extent that SSA identified limitations in some options analyzed, such as reductions in benefits for some recipients, it did not propose any modifications or alternatives to address them. SSA’s actions to date do not sufficiently address concerns about complex living arrangement and ISM policies. During our recent fieldwork, staff and managers continued to cite program complexity as a problem leading to payment errors, program abuse, and excessive administrative burdens. In addition, overpayments associated with living arrangements and ISM remain among the leading causes of overpayments behind unreported wages and resources, respectively. Finally, SSA’s fiscal year 2000 payment accuracy report noted that it would be difficult to achieve SSI accuracy goals without some policy simplification initiatives. In its recently issued “SSI Corrective Action Plan,” SSA stated that within the next several years it plans to conduct analyses of alternative program simplification options beyond those already assessed. Our work shows that administrative penalties and sanctions may be underutilized in the SSI program. Under the law, SSA may impose administrative penalties on recipients who do not file timely reports about factors or events that can affect their benefits—changes in wages, resources, living arrangements, and other support being received. An administrative penalty causes a reduction in 1 month’s benefits. Penalty amounts are $25 for a first occurrence, $50 for a second occurrence, and $100 for the third and subsequent occurrences. 
The penalties are meant to encourage recipients to file accurate and timely reports of information so that SSA can adjust its records to correctly pay benefits. The Foster Care Independence Act also gave SSA authority to impose benefit sanctions on persons who misrepresent material facts that they know, or should have known, were false or misleading. In such circumstances, SSA may suspend benefits for 6 months for the initial violation, 12 months for the second violation, and 24 months for subsequent violations. SSA issued interim regulations to implement these sanction provisions in July 2000 and its November 2000 report cited its implementation as a priority effort to improve SSI program integrity. In our 1998 report, we noted that penalties were rarely used and recommended that SSA reassess its policies for imposing penalties on recipients who fail to report changes that can affect their eligibility. To date, SSA has not addressed our recommendation and staff rarely use penalties to encourage recipient compliance with reporting policies. Over the last several years, SSA data indicate that about 1 million recipients are overpaid annually and that recipient nonreporting of key information accounted for 71 to 76 percent of payment errors. On the basis of SSA records, we estimate that at most about 3,500 recipients were penalized for reporting failures in fiscal year 2001. SSA staff we interviewed cited the same obstacles or impediments to imposing penalties as noted in our 1998 report, such as: (1) penalty amounts are too low to be effective, (2) imposition of penalties is too administratively burdensome, and (3) SSA management does not encourage the use of penalties. SSA has not acted to either evaluate or address these obstacles. Although SSA has issued program guidance to field office staff emphasizing the importance of assessing penalties, this action alone does not sufficiently address the obstacles cited by staff. 
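The penalty and sanction schedules described above are simple occurrence-based lookups. The sketch below is purely illustrative; the function names and the assumption that occurrences are counted sequentially are ours, not a representation of SSA's actual systems logic:

```python
def penalty_amount(occurrence: int) -> int:
    """Administrative penalty for a late or missed report, by occurrence:
    $25 for the first, $50 for the second, and $100 for the third and
    all subsequent occurrences."""
    if occurrence < 1:
        raise ValueError("occurrence must be 1 or greater")
    return {1: 25, 2: 50}.get(occurrence, 100)


def sanction_months(violation: int) -> int:
    """Benefit suspension length for misrepresenting material facts:
    6 months for the initial violation, 12 for the second, and 24 for
    subsequent violations."""
    if violation < 1:
        raise ValueError("violation must be 1 or greater")
    return {1: 6, 2: 12}.get(violation, 24)
```

Note that the penalty reduces a single month's benefit by a fixed dollar amount, while a sanction suspends benefits entirely for the indicated number of months, which is why the report treats sanctions as the stronger deterrent.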
SSA’s administrative sanction authority also remains rarely used. SSA sanctions data indicate that between June 2000 and February 2002, SSA field office staff had referred about 3,000 SSI cases to the OIG because of concerns about fraudulent activity. In most instances, OIG returned the referred cases to the field office because they did not meet prosecutorial requirements, such as high amounts of benefits erroneously paid. At this point, the field office, in consultation with a regional office sanctions coordinator, can determine whether benefit sanctions are warranted. Cases referred because of concerns about fraudulent behavior would seem to be strong candidates for benefit sanctions. However, as of January 2002, field staff had actually imposed sanctions in only 21 SSI cases. Our interviews with field staff identified insufficient awareness of the new sanction authority and some confusion about when to impose sanctions. In one region, for example, staff and managers told us that they often referred cases to the OIG when fraud was suspected, but it had not occurred to them that these cases should be considered for benefit sanctions if the OIG did not pursue investigation and prosecution. Enhanced communication and education by SSA regarding the appropriate application of this overpayment deterrent tool may ultimately strengthen SSA’s program integrity efforts. Over the past several years, SSA has been working to implement new legislative provisions to improve its ability to recover more SSI overpayments. While a number of SSA’s initiatives have yielded results in terms of increased collections, several actions are still in the early planning or implementation stages and it is too soon to gauge what effect they will have on SSI overpayment collections. In addition, we are concerned that SSA’s current overpayment waiver policies and practices may be preventing the collection of millions of dollars in outstanding debt.
In our prior work, we reported that SSA has historically placed insufficient emphasis on recovering SSI overpayments, especially for those who have left the rolls. We were particularly concerned that SSA had not adequately pursued authority to use more aggressive debt collection tools already available to other means-tested benefit programs, such as the Food Stamp Program. Accordingly, SSA has taken action over the last several years to strengthen its overpayment recovery processes. SSA began using tax refund offsets in 1998 to recover outstanding SSI debt. By the end of calendar year 2001, this initiative had yielded $221 million in additional overpayment recoveries for the agency. In the same year, Congress authorized a cross-program recovery initiative, whereby SSA was provided authority to recover overpayments by reducing the Title II benefits of former SSI recipients without first obtaining their consent. SSA implemented this cross-program recovery tool in March 2002. Currently, about 36 percent of SSI recipients also receive Title II benefits, and SSA expects that this initiative will produce about $115 million in additional overpayment collections over the next several years. In 2002, the agency also implemented Foster Care Independence Act provisions allowing SSA to report former recipients with outstanding SSI debt to credit bureaus as well as to the Department of the Treasury. Credit bureau referrals are intended to encourage individuals to voluntarily begin repaying their outstanding debts. The referrals to Treasury will provide SSA with an opportunity to seize other federal benefit payments individuals may be receiving. While overpayment recovery practices have been strengthened, SSA has not yet implemented some key recovery initiatives that have been available to the agency for several years.
Although regulations have been drafted, SSA has not yet implemented administrative wage garnishment, which was authorized in the Debt Collection Improvement Act of 1996. In addition, SSA has not implemented several provisions in the Foster Care Independence Act of 1999. These provisions allow SSA to offset the federal salaries of former recipients, use collection agencies to recover overpayments, and levy interest on outstanding overpayments. In its comments, SSA said that it made a conscious decision to implement first those tools that it judged as most cost effective. It prioritized working on debt collection tools that provide direct collections or that could be integrated into its debt management system. According to SSA, the remaining tools are being actively pursued as resources permit. Draft regulations for several of these initiatives are being reviewed internally. However, agency officials said that they could not estimate when these additional recovery tools will be fully operational. Our work shows that SSI overpayment waivers have increased significantly over the last decade and that current waiver policies and practices may cause SSA to unnecessarily forgo millions of dollars in additional overpayment recoveries annually. Waivers are requests by current and former SSI recipients for relief from the obligation to repay SSI benefits to which they were not entitled. Under the law, SSA field staff may waive an SSI overpayment when the recipient is without fault and the collection of the overpayment either defeats the purpose of the program, is against equity and good conscience, or impedes effective and efficient administration of the program. To be deemed without fault, and thus eligible for a waiver, recipients are expected to exercise good faith in reporting information to prevent overpayments. Incorrect statements that recipients know or should have known to be false or failure to furnish material information can result in a waiver denial. 
If SSA determines a person is without fault in causing the overpayment, it then must determine if one of the other three requirements also exists to grant a waiver. Specifically, SSA staff must determine whether denying a waiver request and recovering the overpayment would defeat the purpose of the program because the affected individual needs all of his/her current income to meet ordinary and necessary living expenses. To determine whether a waiver denial would be against equity and good conscience, SSA staff must decide if an individual incurred additional expenses in relying on the benefit, and thus requiring repayment would affect his/her economic condition. This could apply to recipients who use their SSI benefits to pay for a child’s medical expenses and are subsequently informed of an overpayment. Finally, SSA may grant a waiver when recovery of an overpayment may impede the effective or efficient administration of the program—for example, when the overpayment amount is equal to or less than the average administrative cost of recovering an overpayment, which SSA currently estimates to be $500. Thus, field staff we interviewed generally waived overpayments of $500 or less. The current $500 threshold was established in December 1993. Prior to that time the threshold was $100. Officials told us that this change was based on an internal study of administrative costs related to investigating and processing waiver requests for SSA’s Title II disability and retirement programs. However, the officials acknowledged that the study did not directly examine the costs of granting SSI waivers. Furthermore, they were unable to locate the study for our review and evaluation. During our field visits, staff and managers had varied opinions regarding the time and administrative costs associated with denying waiver requests. 
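The waiver test described above can be sketched as a two-stage check: the recipient must first be without fault, and then at least one of the three recovery conditions must apply. This is a minimal illustration under stated assumptions — the function and parameter names are ours, the fault and hardship determinations are taken as already-made inputs, and the $500 figure is SSA's estimated average administrative cost of recovering an overpayment:

```python
ADMIN_COST_THRESHOLD = 500  # SSA's estimated average cost of recovering an overpayment


def waiver_may_be_granted(without_fault: bool,
                          defeats_program_purpose: bool,
                          against_equity_good_conscience: bool,
                          overpayment_amount: float) -> bool:
    """Illustrative sketch of the SSI waiver test.

    A waiver requires that the recipient be without fault AND that at
    least one of the following hold: recovery would defeat the purpose
    of the program, recovery would be against equity and good
    conscience, or recovery would impede effective and efficient
    administration (approximated here as the overpayment being at or
    below the $500 administrative-cost threshold).
    """
    if not without_fault:
        return False
    impedes_administration = overpayment_amount <= ADMIN_COST_THRESHOLD
    return (defeats_program_purpose
            or against_equity_good_conscience
            or impedes_administration)
```

The last condition explains the field practice noted above: with fault not at issue, an overpayment of $500 or less satisfies the administration prong on its own, so staff generally waive such amounts.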
However, staff often acknowledged that numerous automation upgrades over the past several years may be cause for re-examining the current costs and benefits associated with the $500 waiver threshold. Our analysis of several years of SSI waiver data shows that since the waiver threshold was adjusted, waived SSI overpayments have increased by 400 percent from $32 million in fiscal year 1993 to $161 million in fiscal year 2001. This increase has significantly outpaced the growth in both the number of SSI recipients served and total annual benefits paid, which increased by 12 percent and 35 percent, respectively, during the same period (see fig. 1). Furthermore, the ratio of waived overpayments to total SSI collections has also increased (see fig. 2). In fiscal 1993, SSA waived about $32 million in SSI overpayments or about 13 percent of its total collections. By 1995, waiver amounts more than doubled to $66 million, or about 20 percent, of collections for that year. By fiscal year 2001, SSI waivers totaled $161 million and represented nearly 23 percent of all SSI collections. Thus, through its waiver process, SSA is forgoing collection action on a significantly larger portion of overpaid benefits. While not conclusive, the data indicate that liberalization of the SSI waiver policy may be a factor in the dramatic increase in the amount of overpayments waived. SSA has not studied the impact of the increased threshold. However, officials believe that the trend in waived SSI overpayments is more likely due to increases in the number of annual reviews of recipients’ medical eligibility. These reviews have resulted in an increase in benefit terminations and subsequent recipient appeals. During the appeals process, recipients have the right to request that their benefits be continued. Those who lose their appeal can then request a waiver of any overpayments that accrued during the appeal period. SSA will usually grant these requests under its current waiver policies. 
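The growth figures cited above can be checked with simple arithmetic. In this sketch, the total-collections amounts are backed out of the stated percentages, so they are approximations rather than figures from SSA data:

```python
def pct_increase(old: float, new: float) -> float:
    """Percent increase from old to new."""
    return (new - old) / old * 100


# Waived SSI overpayments, fiscal years 1993 and 2001 (dollars in millions)
waived_1993, waived_2001 = 32, 161
print(round(pct_increase(waived_1993, waived_2001)))  # about 400 percent

# Waivers as a share of total SSI collections; collections are inferred
# from the percentages in the text, so they are approximate.
for year, waived, share_pct in [(1993, 32, 13), (1995, 66, 20), (2001, 161, 23)]:
    collections = waived / (share_pct / 100)
    print(f"{year}: waived ${waived}M of roughly ${collections:.0f}M collected ({share_pct}%)")
```

The computation confirms that waivers grew about fivefold (a roughly 400 percent increase) while recipients and benefits paid grew only 12 and 35 percent, and that the waived share of collections rose from about 13 percent to nearly 23 percent.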
Another factor affecting trends in waivers may be staff application of waiver policies and procedures. Although SSA has developed guidance to assist field staff when deciding whether to deny or grant waivers, we found that field staff have considerable leeway to grant waivers based on an individual’s claim that he or she reported information to SSA that would have prevented an overpayment. In addition, waivers granted for amounts less than $2,000 are not subject to second-party review, while waivers above $2,000 must be reviewed by another employee in the office—not necessarily a supervisor. During our field visits, we identified variation among staff in their understanding as to how waiver decisions should be processed, including the extent to which they receive supervisory review and approval. In some offices, review was often minimal or non-existent regardless of the waiver amount, while other offices required stricter peer or supervisory review. In 1999, SSA’s OIG reported that the complex and subjective nature of SSA’s Title II waiver process, as well as clerical errors and misapplication of policies by staff, resulted in SSA incorrectly waiving overpayments in about 9 percent of 26,000 cases it reviewed. The report also noted that 50 percent of the waivers reviewed were unsupported and the OIG could not make a judgment as to the appropriateness of the decision. The OIG estimated that the incorrect and unsupported waivers amounted to nearly $42 million in benefits. While the OIG only examined waivers under the Title II programs and for amounts over $500, the criteria for granting SSI waivers are generally the same. Thus, we are concerned that similar problems with the application of waiver policies could be occurring in the SSI program. SSA has taken a number of steps to address long-standing vulnerabilities in SSI program integrity.
SSA’s numerous planned and ongoing initiatives demonstrate management’s commitment to strike a better balance between meeting the needs of SSI recipients and ensuring fiscal accountability for the program. However, it is too early to tell how effective SSA will ultimately be in detecting and preventing overpayments earlier in the eligibility determination process, improving future payment accuracy rates, and recovering a greater proportion of outstanding debt owed to it. Reaching these goals is feasible, provided that SSA sustains and expands the range of SSI program integrity activities currently planned or underway, such as increasing the number of SSI financial redeterminations conducted each year and developing and implementing additional overpayment detection and recovery tools provided in recent legislation. A fundamental cause of SSI overpayments is the complexity of the rules governing SSI eligibility. However, SSA has done little to make the program less complex and error prone, especially in regard to living arrangement policies. We recognize that inherent tensions exist between simplifying program rules, keeping program costs down, and ensuring benefit equity for all recipients. However, longstanding SSI payment errors and high administrative costs suggest the need for SSA to move forward in addressing program design issues and devising cost-effective simplification options. Furthermore, without increased management emphasis and direction on the use of administrative penalties and benefit sanctions, SSA risks continued underutilization of these valuable overpayment deterrence tools. Finally, rapid growth in the amount of overpayments waived over the last several years suggests that SSA may be unnecessarily forgoing recovery of significant amounts of overpaid benefits.
Thus, it is essential that SSA’s policies and procedures for waiving overpayments and staff application of those policies be managed in a way that ensures taxpayer dollars are sufficiently protected. In order to further strengthen SSA’s ability to deter, detect, and recover SSI overpayments, we recommend that the Commissioner of Social Security take the following actions:

Sustain and expand the range of SSI program integrity activities underway and continue to develop additional tools to improve program operations and management. This would include increasing the number of SSI redeterminations conducted each year and fully implementing the overpayment detection and recovery tools provided in recent legislation.

Identify and move forward in implementing cost-effective options for simplifying complex living arrangement and in-kind support and maintenance policies, with particular attention to those policies most vulnerable to fraud, waste, and abuse. An effective implementation strategy may include pilot testing of various options to more accurately assess their ultimate effects.

Evaluate current policies for imposing monetary penalties and administrative sanctions and take action to remove any barriers to their usage or effectiveness. Such actions may include informing field staff on when and how these tools should be applied and studying the extent to which more frequent use deters recipient nonreporting.

Reexamine policies and procedures for SSI overpayment waivers and make revisions as appropriate. This should include an assessment of the current costs and benefits associated with the $500 waiver threshold and the extent to which staff correctly apply waiver policies.

SSA agreed with our recommendations and said that our report would be very helpful in its efforts to better manage the SSI program. It will incorporate the recommendations into its SSI corrective action plan, as appropriate.
SSA also assured us that the SSI program is receiving sustained management attention. In this regard, SSA noted that under the current plan it has assigned specific responsibilities to key staff, monitors agency progress, and reviews policy proposals at regularly scheduled monthly meetings chaired by the Deputy Commissioner. While agreeing with each of our recommendations, SSA supplied additional information to emphasize its actions and commitment to improving SSI program integrity. Regarding simplification of complex program rules, SSA said it will continue to assess various program simplification proposals, but it remains concerned about the distributional effects of potential policy changes. SSA also noted that even minor reductions in SSI benefits could significantly affect recipients. Thus, SSA plans to use sophisticated computer simulations to evaluate the potential impacts of various proposals on recipients. We recognize that simplifying the program will not be easy, but it is still a task that SSA needs to accomplish to reduce its vulnerability to payment errors. With regard to its overpayment waiver policies and procedures, SSA agreed to reexamine its current $500 threshold and analyze the extent to which its staff correctly apply waiver policies. SSA also produced data indicating that increases in SSI waivers over the last several years were attributable to the completion of more continuing disability reviews that result in benefit cessation decisions. Consequently, more recipients appeal these decisions and request that their SSI benefits be continued. Recipients can then request waivers of any overpayments that accrued during the appeal period when a cessation decision is upheld. Our report recognizes SSA’s views on the potential cause for increased waivers. However, we also note that SSI overpayment waiver increases may be attributable to inconsistent application of agency waiver policies. 
SSA also provided additional technical comments that we have incorporated in the report, as appropriate. The entire text of SSA’s comments appears in appendix II. We are sending copies of this report to the House and Senate committees with oversight responsibilities for the Social Security Administration. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please call me or Daniel Bertoni, Assistant Director, on (202) 512-7215. Other major contributors to this report are Barbara Alsip, Gerard Grant, William Staab, Vanessa Taylor, and Mark Trapani.

Social Security Administration: Agency Must Position Itself Now to Meet Challenges. GAO-02-289T. Washington, D.C.: May 2, 2002.
Social Security Administration: Status of Achieving Key Outcomes and Addressing Major Management Challenges. GAO-01-778. Washington, D.C.: June 15, 2001.
High Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001.
Major Management Challenges and Program Risks: Social Security Administration. GAO-01-261. Washington, D.C.: January 2001.
Supplemental Security Income: Additional Actions Needed to Reduce Program Vulnerability to Fraud and Abuse. GAO/HEHS-99-151. Washington, D.C.: September 15, 1999.
Supplemental Security Income: Long-Standing Issues Require More Active Management and Program Oversight. GAO/T-HEHS-99-51. Washington, D.C.: February 3, 1999.
Major Management Challenges and Program Risks: Social Security Administration. GAO/OCG-99-20. Washington, D.C.: January 1, 1999.
Supplemental Security Income: Action Needed on Long-Standing Problems Affecting Program Integrity. GAO/HEHS-98-158. Washington, D.C.: September 14, 1998.
High Risk Program: Information on Selected High-Risk Areas. GAO/HR-97-30. Washington, D.C.: May 16, 1997.
High Risk Series: An Overview. GAO/HR-97-1. Washington, D.C.: February 1997.
The Supplemental Security Income (SSI) program is the nation's largest cash assistance program for the poor. The program paid $33 billion in benefits to 6.8 million aged, blind, and disabled persons in fiscal year 2001. Benefit eligibility and payment amounts for the SSI population are determined by complex and often difficult to verify financial factors such as an individual's income, resource levels, and living arrangements. Thus, the SSI program tends to be difficult, labor intensive, and time consuming to administer. These factors make the SSI program vulnerable to overpayments. The Social Security Administration (SSA) has demonstrated a stronger commitment to SSI program integrity and taken many actions to better deter and detect overpayments. Specifically, SSA has (1) obtained legislative authority in 1999 to use additional tools to verify recipients' financial eligibility for benefits, including strengthening its ability to access individuals' bank account information; (2) developed additional measures to hold staff accountable for completing assigned SSI workloads and resolving overpayment issues; (3) provided field staff with direct access to state databases to facilitate more timely verification of recipient's wages and unemployment information; and (4) significantly increased, since 1998, the number of eligibility reviews conducted each year to verify recipient's income, resources, and continuing eligibility for benefits. In addition to better detection and deterrence of SSI overpayments, SSA has made recovery of overpaid benefits a high priority. Despite these efforts, further improvements in overpayment recovery are possible.
Many individuals suffering from advanced chronic obstructive pulmonary disease or other respiratory and cardiac conditions are unable to meet their bodies’ oxygen needs through normal breathing. Supplemental oxygen has been shown to assist many of these patients and is considered a life-sustaining therapy. Physicians prescribe the volume of supplemental oxygen required in liters per minute, or liter flow. Medicare covers supplies and equipment necessary to provide supplemental oxygen if the beneficiary has (1) an appropriate diagnosis, such as chronic obstructive pulmonary disease; (2) reduced levels of oxygen in the blood, as documented with clinical tests; and (3) a physician’s certificate of medical necessity that documents that supplemental oxygen is required. There are three methods, or modalities, for the delivery of supplemental oxygen: oxygen concentrators, which are electrically operated machines about the size of a dehumidifier that extract oxygen from room air; liquid oxygen systems, which consist of both large stationary reservoirs and portable units; and compressed gas systems, which use tanks of various sizes, from large stationary cylinders to small portable cylinders. For most patients, each of the three modalities is equally effective for use as a stationary unit, and clinicians indicated that concentrators can meet the stationary oxygen needs of most patients. Oxygen concentrators account for about 89 percent of the stationary systems used by Medicare patients. Liquid oxygen systems account for about 11 percent of the stationary systems used by Medicare patients. Liquid oxygen systems are preferred by many pulmonologists and respiratory therapists for the less than 2 percent of patients who need a high liter flow—defined by Medicare as 4 or more liters of oxygen per minute. 
Liquid systems are also sometimes preferred by highly mobile patients because patients can refill lightweight portable liquid units directly from their home stationary reservoirs. Liquid oxygen is usually the most expensive modality for many reasons, including the cost of equipment and the need to use specially equipped delivery trucks, adhere to various regulatory requirements, and replenish a patient’s supply on a regular basis. Compressed gas accounts for less than 1 percent of the stationary systems used by Medicare patients. In addition to a stationary unit for use in the home, about 79 percent of Medicare home oxygen patients have portable units that allow them to perform activities away from their stationary unit and outside the home. The most common portable unit is a compressed gas E tank set on a small cart that can be pulled by the user. Pulmonologists and respiratory therapists advise that patients using supplemental oxygen get as much exercise as possible and believe that lightweight portable equipment can facilitate this activity. Such equipment options for active individuals include portable liquid oxygen units and lightweight gas cylinders, which can be carried in a backpack or shoulder bag. A recent technological improvement in the provision of oxygen is the use of conserving devices, which are more efficient in delivering oxygen and therefore maximize the time a lightweight gas cylinder can last. Without a conserving device, very small tanks only last between 1 and 2 hours at a flow rate of 2 liters per minute, making them impracticable for all but short trips away from home. However, not all patients who need lightweight equipment can use conserving devices. Pulmonary clinicians recommend that all patients be tested to ensure they are proper candidates for this technology, since some patients cannot maintain adequate blood oxygen levels when using conserving devices. 
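The tank-duration arithmetic behind these observations is straightforward: usable minutes equal cylinder capacity divided by liter flow, stretched by any conserving-device savings factor. In the sketch below, the 164-liter cylinder capacity and the 3:1 conserving ratio are illustrative assumptions, not figures from the text:

```python
def usable_minutes(capacity_liters: float, flow_lpm: float,
                   conserving_ratio: float = 1.0) -> float:
    """Estimate how long a gas cylinder lasts at a given continuous flow.

    conserving_ratio models a conserving device's savings factor
    (e.g., 3.0 means the device stretches the supply roughly threefold).
    All figures used with this function here are illustrative assumptions.
    """
    return capacity_liters / flow_lpm * conserving_ratio


# A small portable cylinder of an assumed 164 usable liters at 2 L/min:
print(usable_minutes(164, 2))       # 82.0 minutes, within the 1-2 hour range cited
print(usable_minutes(164, 2, 3.0))  # 246.0 minutes with an assumed 3:1 conserving device
```

This illustrates why conserving devices matter: without one, a very small tank supports only a short trip, while a modest savings factor can extend the same tank to several hours.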
In 1997, the monthly fee schedule allowance for a stationary oxygen system was about $300, and in 1998 the allowance was reduced to about $225. Medicare pays 80 percent of the allowance, and the patient is responsible for the remaining 20 percent. The Medicare oxygen allowance covers use of the equipment; all refills of gas or liquid oxygen; supplies such as tubing; and services such as equipment delivery and setup, training for patients and caregivers, periodic maintenance, and repairs. The Medicare monthly allowance for a portable unit was about $48 in 1997 and $36 in 1998. Medicare does not pay an additional allowance for a conserving device, but these devices can lower suppliers’ costs by reducing the frequency of deliveries to their patients. Regardless of the type of oxygen system supplied to a patient, Medicare pays a fixed monthly rate. This type of payment system is intended to give suppliers a financial incentive to lower their costs because they can keep the difference between their Medicare payments and their costs. Suppliers can reduce their costs in various ways, including streamlining operations or utilizing new technology to become more efficient, switching patients to less expensive modalities, and reducing the number or type of patient support services. Some of these approaches can reduce costs while maintaining the quality and adequacy of services. Others, however, could potentially compromise the effectiveness of home oxygen therapy for some Medicare beneficiaries. Most suppliers accept Medicare’s allowance as full payment for home oxygen equipment and file claims directly with the Medicare program through a process known as “assignment.” Suppliers do not have to accept assignment, however, and if they do not, there is no limit to the amount they can charge. 
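The payment arithmetic described above is a fixed 80/20 split of the monthly fee schedule allowance. A quick sketch, assuming the approximate allowances stated in the text (the function name is ours):

```python
def medicare_split(allowance: float) -> tuple:
    """Split a fee-schedule allowance into Medicare's 80 percent share
    and the patient's 20 percent coinsurance."""
    medicare_share = round(allowance * 0.80, 2)
    patient_share = round(allowance - medicare_share, 2)
    return medicare_share, patient_share


# Approximate monthly allowances from the text
stationary_1997, stationary_1998 = 300, 225
portable_1997, portable_1998 = 48, 36

print(medicare_split(stationary_1998))  # (180.0, 45.0)

# Both allowances dropped by the same proportion between 1997 and 1998:
print((stationary_1997 - stationary_1998) / stationary_1997)  # 0.25
print((portable_1997 - portable_1998) / portable_1997)        # 0.25
```

The last two lines confirm that the 1998 stationary and portable allowances each reflect a 25 percent reduction from the 1997 rates.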
The businesses that supply home oxygen to Medicare beneficiaries are diverse, varying in size from small companies run by one or two respiratory therapists to large publicly traded corporations with branches throughout the country. Home oxygen suppliers also include hospital affiliates, franchises, and nonprofit corporations. Some suppliers specialize in home oxygen and other respiratory services, others provide various types of medical equipment and services such as home infusion, and still others are part of a full-service pharmacy. Medicare is the single largest payer for home oxygen for most suppliers we met with, except those who specialize in VA and other large-volume contracts. Some states require that home oxygen suppliers be licensed and have respiratory therapists on staff, but others do not. Many suppliers are accredited by the Joint Commission for Accreditation of Healthcare Organizations, but this accreditation is not required by the Medicare program. Preliminary information indicates that access to home oxygen equipment remains largely unchanged, despite the 25-percent Medicare payment reduction that took effect in January 1998. Medicare claims data revealed little change in use patterns during the first 6 months after the January 1998 payment reduction, and virtually all oxygen suppliers continue to accept assignment for home oxygen. Some beneficiaries are expensive or difficult to serve because they live in rural areas served by few providers, require lightweight portable equipment, or require high-liter-flow liquid oxygen systems. These beneficiaries are, therefore, vulnerable to cutbacks by suppliers. Nevertheless, hospital discharge planners we interviewed said they can still arrange appropriate home oxygen equipment for most patients. In addition, we were told that, in general, the limitations on the availability of certain types of equipment that exist now were present before the payment reductions. 
Also, although there has been about a 6.5-percent decrease in the number of Medicare home oxygen suppliers, most Medicare patients can still choose from among competing firms. The full range of oxygen modalities continues to be available to Medicare beneficiaries, according to the Medicare claims reports, although oxygen concentrators predominate as the system most commonly provided for home oxygen. As the technology of concentrators continues to improve, oxygen concentrators have been slowly replacing stationary liquid systems. This trend is observed in the aggregate data, which show that claims for liquid stationary systems declined by approximately 12 percent between the first half of 1997 and the first half of 1998. During the same period, the use of portable liquid oxygen systems declined by 11 percent, even though the use of portable systems rose overall. (See table 1.) Another indication that home oxygen access has not been impaired is that the oxygen supplier assignment rates for all modalities have remained relatively unchanged since the 1998 payment reduction. In fact, the claims data show that assignment rates for home oxygen increased slightly between the first half of 1997 and the first half of 1998, leading us to conclude that the suppliers are willing to furnish home oxygen equipment and services even at the reduced rates. Although claims data for the first half of 1998 are not final, our claims data analysis from prior periods indicates that use rates established from preliminary data closely approximate the final results. However, subtle shifts in the kinds of oxygen equipment provided are not evident in aggregate claims data. For example, claims data do not identify the types of portable tanks provided to beneficiaries. Therefore, it is not possible to determine from the claims data how many beneficiaries are receiving lightweight portable tanks and how many are using the cart-mounted E tanks. 
Similarly, claims data do not indicate the number of refills provided to patients each month, so we could not determine if the frequency of tank refills has changed since the rate reduction. Overall, we found no evidence that home oxygen patients who are more expensive or difficult to serve—such as those who live in rural areas, need lightweight portable equipment, or require high-liter-flow systems—were adversely affected by the payment cuts. In response to the substantial payment reductions, suppliers could have been expected to try to reduce costs, making these higher-cost patients more vulnerable to treatment changes. Although we looked for indications that suppliers had refused to serve these special needs patients, limited the types of equipment made available, or reduced service levels, our interviews with suppliers, discharge planners, patient advocates, and physicians indicated that most Medicare beneficiaries continued to have access to appropriate equipment options. The only indication of access problems that we found occurred in Anchorage, Alaska, where pulmonary clinicians stated that liquid systems are no longer available on assignment to their Medicare patients. Beneficiaries in rural areas have always faced restrictions on home oxygen options, but their access, according to hospital discharge planners we interviewed, appears unchanged. These beneficiaries are more expensive to serve because they are farther from suppliers’ facilities and distances between patients are greater. Suppliers who serve patients in remote areas informed us that it is difficult to support the full range of equipment options because of such factors as vast distances, poor road conditions, and unpredictable weather but that this situation existed before the 1998 payment reductions. Several suppliers told us that they generally cannot provide liquid oxygen to people who live 40 to 60 miles from their facility. 
However, hospital discharge planners in New Mexico and South Dakota told us that the Medicare payment reduction has not affected their ability to arrange appropriate home oxygen services for their patients, even those who live in the most remote parts of those states. Another challenge in providing adequate options in rural areas is the number of suppliers and the degree of competition for patients. A patient who lives in an isolated South Dakota town may have only one or two suppliers to choose from. Thus, the need to maintain market share may not motivate suppliers in these areas to provide certain costlier equipment and services. In contrast, a representative of a major regional supplier in the Washington, D.C., area said that it had begun to evaluate patients more carefully before providing them liquid systems. Nevertheless, the supplier intended to keep liquid oxygen as an option to maintain positive relationships with referral sources, who can choose from numerous suppliers. Discharge planners in a hospital on Cape Cod, Massachusetts, told us they have not had any problems finding suppliers to take Medicare assignment on liquid oxygen for their patients because Boston and Providence are nearby, and there are many suppliers in the area. In many rural areas, the choice of home oxygen supplier is much more limited. Although the equipment and refill needs of highly mobile patients are more expensive to meet than those of relatively inactive patients, most discharge planners, pulmonary rehabilitation professionals, and suppliers we interviewed believe these patients’ needs are increasingly being met with lightweight, portable gas tanks with conserving devices. This relatively new technology can be less expensive than liquid units and, for patients who can tolerate an oxygen conserving device, still provide greater mobility than heavier gas tanks mounted on carts. 
We found no indication that patients who require a high-liter-flow system have less access to the proper equipment now than before the payment reduction, except in Alaska. High-liter-flow patients are more expensive to serve than other patients because they require more frequent deliveries of gas or liquid oxygen. The Medicare payment system recognizes that suppliers’ costs are higher for these patients and allows a 50-percent increase in the payment for a stationary unit for patients who require over 4 liters of oxygen per minute. Medicare does not reimburse suppliers separately for the portable unit if the high-liter-flow adjustment is paid, but many of the suppliers we met with agreed that the adjustment adequately compensated them for their added costs. Fewer than 2 percent of paid home oxygen claims were for high-liter-flow patients, which was consistent with information we received from clinicians. Though advances in technology have made oxygen concentrators more effective at delivering flow rates of up to 6 liters per minute, several pulmonologists and respiratory therapists we met with said that liquid oxygen is the preferred option for these patients. Even before the Medicare payment reductions, many suppliers were not providing liquid oxygen for high-liter-flow patients who lived far from their facilities. For these patients, suppliers sometimes provide a high-liter-flow concentrator, link two concentrators together to increase the overall liter flow, or supply compressed gas. The hospital discharge planners and suppliers we talked with said they were able to make arrangements with suppliers for all patients with high-liter-flow needs. In contrast to our findings looking at the country as a whole, we did identify concerns about lack of access to liquid oxygen systems in the Anchorage, Alaska, area. 
According to the Pulmonary Education and Research Foundation, letters from Medicare beneficiaries, and interviews with a pulmonologist and respiratory therapists in Anchorage, since the Medicare payment reduction, no home oxygen suppliers there have been willing to accept Medicare assignment for liquid oxygen. While liquid oxygen systems had not generally been available in remote areas of Alaska, as in the remote parts of other states, at least one supplier was providing home liquid oxygen systems to patients in the Anchorage area on assignment before the payment reduction. After the payment reduction, the supplier replaced its liquid systems with concentrators for stationary units and either E tanks or lightweight gas tanks with conserving devices for portable use, depending on the patient’s activity level. For most patients, this was an acceptable alternative. However, some patients cannot tolerate the conserving devices or are unable to maneuver E tanks on carts, especially in the snow. Respiratory therapists in Anchorage informed us that some patients are now unable to leave their homes without help. Because there are no suppliers willing to take Medicare assignment for liquid oxygen, these patients have no other options for lightweight portable systems without incurring significant out-of-pocket costs. The mid-1990s was a period of expansion for the home oxygen industry, characterized by growth in the total number of home oxygen suppliers. This trend was reversed in 1998 after the lower Medicare payment rates took effect, as some supply companies merged or left the marketplace. Nevertheless, sufficient competition remained, providing most patients with a choice of suppliers. In addition to industry consolidation, suppliers have implemented a variety of strategies to improve the efficiency of operations and reduce costs. Overall, the number of Medicare home oxygen suppliers has declined by about 6.5 percent since the January 1998 payment reduction. 
The market share of the largest suppliers increased slightly from 40 percent in the first half of 1997 to 43 percent in the first half of 1998. (See table 2.) Many of the suppliers that have stopped submitting claims to Medicare for home oxygen had not previously offered the full range of home oxygen equipment options to beneficiaries but had supplied predominantly oxygen concentrators. In 1994, over 1,300 Medicare suppliers, or 22 percent, received at least 98 percent of their Medicare home oxygen revenues for concentrators and focused on serving the least costly patients. By the first half of 1998, this number had fallen to just over 1,000 firms. (See table 3.) When we asked suppliers how they have responded to the payment cuts, many said they have developed strategies to improve efficiency and maintain their profitability. These strategies include operational adjustments, such as making less frequent deliveries and service visits, purchasing more reliable equipment, reducing staff, and using fewer credentialed respiratory therapists. According to suppliers and industry representatives, some suppliers have reevaluated their product lines because, prior to the payment cuts, oxygen revenues had often subsidized less profitable medical equipment items. Other suppliers have switched patients from liquid oxygen to less expensive systems or are screening new patients more carefully before setting them up with a liquid unit. These strategies have left overall access to home oxygen equipment substantially the same, but they have changed the way that home oxygen equipment and services are provided to Medicare beneficiaries. Some suppliers we interviewed said they are maintaining their current levels of service, including providing a range of equipment options and using credentialed therapists for patient visits, for two reasons: their internal standards of patient care and their need to remain competitive with other suppliers. 
Many other suppliers said that they have reviewed the services they provide to determine where to reduce costs. Their strategies include more completely assessing patients’ need for liquid oxygen, carefully planning delivery routes, calling patients in advance to find out what supplies they need, keeping their trucks stocked with supplies to avoid extra trips, and reducing the frequency of maintenance visits. There is also anecdotal evidence that some suppliers, contrary to Medicare rules, have refused to deliver portable tanks when patients need refills or have limited their patients to a fixed number of refills per month. We were unable to document these practices. One supplier we talked with conducted a review of patients already on liquid oxygen to determine who could be switched to concentrators and portable lightweight gas systems equipped with an oxygen conserving device. This supplier said he consulted every patient’s physician and obtained permission to make the equipment change. Further, the patients were tested to ensure that they were able to tolerate the new lightweight portable equipment. Other firms stated that while they will not change the oxygen delivery systems they are currently providing to patients, they will provide liquid systems to new patients only if they have high-liter-flow needs or if their ambulatory needs cannot be met with the compressed gas systems available. In a November 1997 report, we made several recommendations to HCFA about its implementation of the BBA provisions, including that it monitor trends in Medicare beneficiaries’ access to the various types of home oxygen equipment; restructure the modality-neutral payment, if warranted; educate prescribing physicians about their right to specify the home oxygen systems that best meet their patients’ needs; and establish service standards for home oxygen suppliers. HCFA has made only modest beginnings in addressing the BBA provisions and our recommendations. 
As required by the BBA, HCFA has contracted with a peer review organization (PRO) to evaluate access to and quality of home oxygen equipment and services provided to Medicare patients. The PRO plans to gather evidence from various sources, including Medicare claims data on equipment use patterns, hospitalization rates, and utilization of home health services by home oxygen patients. An important component of this study will be a survey of beneficiaries, suppliers, and physicians. Changes in supplier practices will be an indicator of the impact of the payment reduction. The PRO will use this information to assess whether the payment reduction has affected the types of equipment and level of services provided to home oxygen patients. HCFA has not decided whether this will be a one-time assessment or an ongoing effort to monitor trends. Results from the PRO study are not expected until January 2000. The BBA gave HHS the authority to restructure the modality-neutral payment system for home oxygen, but HCFA has not established an ongoing process for monitoring access to determine if such a restructuring is warranted. HCFA officials said they will use the results of the PRO study and the competitive bidding demonstration project to evaluate the need to restructure the oxygen payment system. However, the PRO study will not be completed until at least January 2000, or 2 years after the first payment reduction, and neither project will provide HCFA information on access problems as they develop. HCFA has the ability to monitor access indicators but has not done so. For example, HCFA could ask its contractors to track beneficiary complaints, such as insufficient refills of portable tanks or, as occurred in Anchorage, problems with access to liquid oxygen systems. Although HCFA’s claims processing contractors can specially code and track beneficiary inquiries and complaints about specific equipment and services, such as home oxygen, HCFA has not asked them to do so. 
Prescribing physicians and patients could better help HCFA identify access problems if they were fully informed about the home oxygen benefit. Although HCFA is able to identify both groups from claims data, HCFA has not provided these groups with information about the Medicare payment cuts or encouraged them to report access problems. For example, the pulmonary physician and therapists at the Anchorage clinic we spoke with did not know what equipment and services the Medicare home oxygen benefit covers. The National Association for Medical Direction of Respiratory Care believes that HCFA has done little to help educate doctors about their options when prescribing home oxygen. Similarly, patients may be unaware that the Medicare allowance covers all their oxygen needs, including home delivery of equipment and needed refills of portable tanks. In contrast, many VA Medical Centers provide brochures to home oxygen patients outlining the responsibilities of both the patient and the supplier. Despite the BBA mandate and our recommendations and those of HHS’s Office of the Inspector General, HCFA has not developed service standards for oxygen suppliers beyond generic requirements for all durable medical equipment suppliers. In contrast, most VA and managed care contracts specifically define service requirements, such as the frequency of maintenance visits and the level of patient education. Service standards would define what Medicare is paying for and what beneficiaries should expect from suppliers. Standards are even more important as suppliers respond to reduced payment rates. One HCFA official told us that HCFA must address those BBA requirements that have specific target dates, as well as Year 2000 computer issues, before attending to our recommendations and those of the Office of the Inspector General. HCFA has developed a set of service standards that will apply only to home oxygen suppliers that participate in the competitive pricing demonstration project. 
HCFA officials informed us that they will consider the effectiveness of these standards in the development of service standards applicable to all home oxygen suppliers. However, some industry representatives have criticized the demonstration project standards as being too limited to ensure an acceptable level of service for home oxygen patients. Early evidence suggests that the reduction in Medicare payment rates for home oxygen has not had a major impact on access. Generally, the access problems that we found existed before the payment reductions occurred. The PRO study HCFA has contracted for will provide a more in-depth look at this issue. Suppliers are responding in various ways to the lower payment rates. Consolidation continues to occur in the home oxygen industry, leaving fewer small firms that do not provide a full range of oxygen services. Most companies have developed varying strategies to mitigate the impact of the payment reduction, including reevaluations of operations, which have led to increased operating efficiencies and changes in how suppliers provide their patients with equipment and services. Despite these early indications that access to home oxygen has not diminished since the implementation of the payment reductions, subtle access issues may not be readily apparent, and additional problems could emerge as more and better information becomes available. Given the importance of this benefit to some vulnerable Medicare beneficiaries, especially those who live in rural areas, are highly active, or require a high liter flow, HCFA needs to be vigilant in its efforts to detect any problems. Beyond contracting for the PRO study, HCFA has not established an ongoing method for monitoring the use of this benefit and gathering the information essential to assessments of the modality-neutral payment system. Nor has HCFA developed service standards for home oxygen suppliers as required by the BBA. 
The continued absence of specific service standards allows suppliers themselves to decide what services they will provide home oxygen patients. We recommend that the Administrator of HCFA do the following: monitor complaints about and analyze trends in Medicare beneficiaries’ use of and access to home oxygen equipment, paying special attention to patients who live in rural areas, are highly active, or require a high liter flow; on the basis of this ongoing review, as well as the results of the PRO study, consider whether to modify the Medicare payment method to preserve access; and make development of service standards for home oxygen suppliers an agency priority in accordance with the BBA’s requirement to develop such standards. We provided draft copies of this report to HCFA, representatives of the home oxygen industry, and officials of associations representing respiratory care specialists and physicians who treat patients with chronic lung disease. The reviewers suggested some technical corrections, which we incorporated into the report. Generally, HCFA agreed with the report’s contents and concurred with our recommendations. HCFA emphasized that it has contracted for the BBA-mandated PRO study, which it believes will provide an assessment of access to home oxygen equipment. In the interim, HCFA said it is relying on this report to alert the agency to any immediate access problems. Further, HCFA believes that the payment reduction will not disrupt patient access to the home oxygen benefit, given the previous excessive rates. In light of efforts to address the Year 2000 computer issues confronting the agency and its limited resources, HCFA felt it had adequately addressed the need to monitor access to the home oxygen benefit. HCFA acknowledged that it has not developed specific service standards for the home oxygen benefit as required by law. 
However, officials stated that the agency intends to publish new service standards applicable to all durable medical equipment suppliers in the next few months. After that, it plans to develop specific service standards for the home oxygen benefit. While we acknowledge the extent of HCFA’s responsibilities, we believe that waiting for the PRO study to evaluate access issues is not prudent, considering the life-sustaining nature of this benefit to its users. We believe that HCFA could take steps now, with a minimal expenditure of resources, that could not only supplement the results of the PRO study but also alert the agency to access problems before the PRO study is released. HCFA stated that it will have its regional offices and contractors monitor complaints regarding access to home oxygen. The full text of HCFA’s comments is included as an appendix. Industry representatives and directors of associations representing respiratory care specialists and physicians also generally agreed with the report’s contents. However, industry representatives believe that our definition of access to home oxygen equipment should include not only the equipment provided Medicare beneficiaries but also the types of services provided them and their frequency. These industry representatives are concerned that any service standards developed by HCFA will be inadequate to ensure an acceptable level of care. They believe that clinical studies of the effects of various services on patient outcomes are necessary to fully evaluate the impact of the payment reduction. They also believe that the cost savings resulting from the payment reduction for home oxygen could be offset by higher hospital readmissions or other services used by oxygen users. Finally, they stated that the full impact of the payment reduction has not yet been felt and that monitoring of access should continue. 
For the purposes of this report, we based our definition of access on the Medicare coverage guidelines for the home oxygen benefit. HCFA has not defined specific service standards for this benefit, and it would not be appropriate for us to expand HCFA’s current definition of what is covered by the home oxygen benefit. Further, while evaluating patient outcomes was beyond the scope of this report, the PRO study will include specific patient outcomes, such as hospital readmissions and use of home health services, in its evaluation. We are sending copies of this report to Ms. Nancy-Ann Min DeParle, Administrator, Health Care Financing Administration, and appropriate congressional committees. We will also make copies available to others upon request. This report was prepared by Anna Kelley, Frank Putallaz, and Suzanne Rubins under the direction of William Reis, Assistant Director. Please call Mr. Reis at (617) 565-7488 or me at (202) 512-7114 if you or your staff have any questions about the information in this report. 
| Pursuant to a legislative requirement, GAO provided information on Medicare beneficiaries' access to home oxygen equipment, focusing on: (1) changes in access to home oxygen for Medicare patients since the payment reduction mandated by the Balanced Budget Act (BBA) of 1997 took effect; and (2) actions taken by the Health Care Financing Administration (HCFA) to fulfill the BBA requirements and respond to GAO's November 1997 recommendations. GAO noted that: (1) preliminary indications are that access to home oxygen equipment remains substantially unchanged, despite the 25-percent reduction in Medicare payment rates that took effect in January 1998; (2) the number of Medicare beneficiaries using home oxygen equipment has been increasing steadily since 1996, and this trend appears to have continued in 1998; (3) while Medicare claims for the first 6 months of 1998 showed a decrease in the proportion of Medicare patients using the more costly stationary liquid oxygen systems, this decline was consistent with the trend since 1995; (4) hospital discharge planners and suppliers GAO talked with said that even Medicare beneficiaries who are expensive or difficult to serve are able to get the appropriate systems for their needs; (5) further, suppliers accepted the Medicare allowance as full payment for over 99 percent of the Medicare home oxygen claims filed for the first half of 1998; (6) although these indicators do not reveal access problems caused by the payment reductions, issues such as sufficiency of portable tank refills and equipment maintenance could still arise; (7) HCFA has responded to only one BBA requirement; (8) as required by the BBA, HCFA has contracted with a peer review organization (PRO) for an evaluation of access to, and quality of, home oxygen equipment; (9) results from this evaluation are not expected before 2000; (10) meanwhile, HCFA has not implemented an interim process to monitor changes in access for Medicare beneficiaries--a process that could 
alert the agency to problems as they arise; (11) although not required by the BBA, such monitoring is important because of the life-sustaining nature of the home oxygen benefit; (12) until HCFA gathers more in-depth information on access and the impact of payment reductions, HCFA cannot assess the need to restructure the modality-neutral payment; (13) HCFA has not yet implemented provisions of the BBA that require service standards for Medicare home oxygen suppliers to be established as soon as practicable; and (14) service standards would define what Medicare is paying for in the home oxygen benefit and what beneficiaries should expect from suppliers. |
The Air Force and the Navy plan to use the JPATS aircraft to train entry level Air Force and Navy student pilots in primary flying to a level of proficiency from which they can transition into advanced pilot training. The JPATS aircraft is designed to replace the Air Force’s T-37B and the Navy’s T-34C primary trainer aircraft and other training devices and courseware. It is expected to have a life expectancy of 24 years and provide better performance and improved safety, reliability, and maintainability than existing primary trainers. For example, the JPATS aircraft is expected to overcome certain safety issues with existing trainers by adding an improved ejection seat and a pressurized cockpit. The JPATS aircraft is expected to be more reliable than existing trainers, experiencing fewer in-flight engine shutdowns and other equipment failures. It is also expected to be easier to maintain because it is to use more standard tools and common fasteners. To calculate the number of JPATS aircraft required, the Air Force and the Navy in 1993 used a formula that considered such factors as the aircraft utilization rate, annual flying hours, mission capable rate, attrition rate, sortie length, working days, and turnaround time. The Air Force calculated a need for 372 JPATS aircraft, and the Navy calculated a need for 339, for a total combined requirement of 711 JPATS aircraft. In December 1996, the two services reviewed these requirements. At that time, the Navy approved an increase of 29 aircraft, increasing its total to 368 aircraft. This increased total requirements from 711 to 740 JPATS aircraft. The Air Force’s Air Education and Training Command—responsible for pilot training—determined that the Air Force would need 441 aircraft instead of 372 aircraft. However, the Air Force did not approve this increase. The JPATS aircraft shown in figure 1, the T-6A Texan II, is to be a derivative of the Pilatus PC-9 commercial aircraft. 
Raytheon Aircraft Company, the contractor, plans to produce the aircraft in Wichita, Kansas, under a licensing agreement with Pilatus, the Swiss manufacturer of the PC-9. The JPATS aircraft will undergo limited modification to incorporate several improvements and features that are not found in the commercial version of the aircraft, but are required by the Air Force and the Navy. Modifications involve (1) improved ejection seats, (2) improved birdstrike protection, (3) a pressurized cockpit, (4) an elevated rear (instructor) seat, and (5) flexibility to accommodate a wider range of male and female pilot candidates. These modifications are currently being tested during the qualification test and evaluation phase, which is scheduled to be completed in November 1998. Initial operational capability is planned for fiscal year 2001 for the Air Force and fiscal year 2003 for the Navy. The Air Force and the Navy competitively selected an existing commercial aircraft design to satisfy their primary trainer requirements instead of developing a new trainer aircraft. This competitive acquisition strategy, according to Air Force officials, resulted in original program estimates of about $7 billion being reduced to about $4 billion upon contract award. The Air Force, as executive agent for the program, awarded a contract to Raytheon in February 1996 to develop and produce between 102 and 170 JPATS aircraft with the target quantity of 140, along with simulators and associated ground based training system devices, a training management system, and instructional courseware. The contract included seven production options. Through fiscal year 1997, the Air Force has exercised the first four options, acquiring 1 aircraft for engineering and manufacturing development and 23 production aircraft. A separate contract was awarded to Raytheon for logistics support, with options for future years’ activities. Production is scheduled to continue through 2014. 
In 1996, the Air Force and the Navy calculated the number of JPATS aircraft required using several factors, including projections of JPATS mission capable rates and projected attrition rates based on historical experience. However, the data they used in their calculations contained various inconsistencies. For example, the projections of JPATS aircraft mission capable rates of 91 percent and 80 percent used by the Air Force and the Navy, respectively, to calculate the requirements differed substantially from each other and from the 94-percent rate included in the contract for procurement of the aircraft. The result of using lower mission capable rates to calculate aircraft quantities is that more aircraft would be needed to achieve annual flying hour requirements for training than if higher rates were used. Furthermore, the Air Force’s projected attrition rates were not consistent with historical attrition experience with its existing primary trainer, and the Navy used a rate that differs from the rate that DOD now says is accurate. Until these inconsistencies are resolved, it is unclear how many JPATS aircraft should be procured. Although the Air Force and the Navy are procuring the same JPATS aircraft to train entry level pilots and the aircraft will be operated in a joint training program, they used different mission capable rates to calculate aircraft requirements. Specifically, the Air Force used a 91-percent mission capable rate and the Navy used an 80-percent rate. Neither of these rates is consistent with the JPATS contract that requires Raytheon to build an aircraft that meets or exceeds a 94-percent mission capable rate. Therefore, we recalculated the Air Force's and the Navy's total JPATS aircraft requirements using the same formula as the services, substituting the 94-percent contract mission capable rate in place of the rates used by the Air Force and the Navy. 
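The recalculation described above can be sketched as a simple proportional adjustment. The services' full requirements formula (utilization rate, sortie length, turnaround time, and the other factors noted earlier) is not published in this report, so the sketch below assumes only that, all else held equal, the required quantity scales inversely with the mission capable rate; under that simplification the results approximate, but do not exactly reproduce, the service-computed figures.

```python
# Hedged sketch: assumes required aircraft quantity scales inversely
# with the mission capable (MC) rate, all other formula factors fixed.
import math

def rescale_requirement(quantity: int, mc_used: float, mc_contract: float) -> int:
    """Re-estimate an aircraft requirement at the contract MC rate."""
    return math.ceil(quantity * mc_used / mc_contract)

# Report figures: the Air Force computed 372 aircraft at a 91% MC rate,
# the Navy 368 at 80%; the JPATS contract requires a 94% MC rate.
air_force = rescale_requirement(372, 0.91, 0.94)  # 361 under this simplification
navy = rescale_requirement(368, 0.80, 0.94)       # 314 under this simplification
```

Because the real formula includes additional factors, these proportional results differ slightly from the Table 1 reductions of 10 and 50 aircraft, but they illustrate why the lower assumed mission capable rates inflate the quantities.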
Table 1 shows how higher mission capable rates could decrease JPATS aircraft quantity requirements by as many as 60 aircraft—10 for the Air Force and 50 for the Navy. The attrition rate used by the Air Force to calculate the number of JPATS aircraft needed was more than twice the attrition rate of its current primary trainer that was placed in service in the late 1950s. The Air Force estimated that 1.5 JPATS aircraft would be lost or damaged beyond repair for every 100,000 flying hours. However, the historic attrition rate for the current primary trainer is 0.7 per 100,000 flying hours. Although DOD advised us that single-engine trainers such as JPATS are expected to have higher attrition rates than two-engine trainers such as the T-37B, we note that important JPATS features are increases in safety and reliability, including fewer in-flight engine shutdowns and other equipment failures. In addition, use of an advanced ground based training system, being acquired as part of the JPATS program, is expected to result in greater pilot familiarity with the aircraft’s operation prior to actual flights. Data provided by the Navy and DOD regarding attrition rates are conflicting. For example, the Navy’s calculations in 1996 used an attrition rate of 1.5 aircraft per 100,000 flight hours to calculate the required quantity of JPATS aircraft. To derive this rate, the Navy factored in the attrition experience of the existing T-34C trainer, using a lifetime attrition rate of 0.4 per 100,000 flight hours. However, in commenting on a draft of this report, DOD stated that the lifetime attrition rate for the T-34C is 2.1 aircraft per 100,000 flying hours and the Navy provided data that it believed supported this rate. Our analysis, however, showed that the data supported a rate of 3.6 aircraft per 100,000 flying hours, which differs from both the Navy and DOD figures. The JPATS aircraft procurement plan does not take advantage of the most favorable prices provided by the contract. 
The contract includes annual options with predetermined prices for aircraft orders of variable quantities. Procurement of fewer than the target quantity can result in a unit price increase from 1 to 52 percent. Procurement above the target quantity, or at the maximum quantity, however, provides very little additional price reduction. The contract contains unit price charts for the variation in quantities specified in lots II through VIII. The charts contain pricing factors for various production lot quantity sizes that are used in calculating unit prices based on previous aircraft purchases. The charts are designed so that the unit price increases if the number of aircraft procured is fewer than target quantities and decreases if quantities procured are more than target quantities. As shown in table 2, lots II through IV have been exercised at the maximum quantities of 2 (plus 1 developmental aircraft), 6, and 15. According to the procurement plan, 18 aircraft are to be procured during fiscal year 1998 and 12 aircraft during fiscal year 1999, resulting in a total of 30 aircraft. All of these aircraft are being procured by the Air Force. In fiscal year 2000, the Navy is scheduled to begin procuring JPATS aircraft. Our analysis shows that DOD can make better use of the price advantages that are included in the JPATS contract. For example, as shown in table 3, 30 aircraft can be procured more economically if 16, rather than 18, aircraft are procured in fiscal year 1998 and 14, rather than 12, aircraft are procured in fiscal year 1999. If as few as 16 aircraft were procured in fiscal year 1998, they could be acquired at the same unit price as currently planned because the unit price would not increase until fewer than 16 JPATS aircraft were procured in fiscal year 1998.
Deferring 2 aircraft from fiscal year 1998 to fiscal year 1999 would increase the quantity in fiscal year 1999 from 12 to 14, resulting in a reduction of the unit price for fiscal year 1999, from $2.905 million to $2.785 million. This deferral would not only save $1.360 million over the 2 years but also reduce the risk of buying aircraft before the completion of operational testing, because delaying the purchase of two aircraft would permit more testing to be completed. DOD could also save money if it altered its plans to procure 26 aircraft in fiscal year 2000, which is a quantity lower than the target of 32 aircraft. The unit price could be reduced by $104,212, or 4 percent, if DOD procured the target quantity. In addition, once the JPATS aircraft successfully completes operational test and evaluation, the aircraft could be procured at the more economical, or target, rates. Our analysis demonstrates that maintaining yearly production rates at least within the target range is more economical than production rates in the minimum range. As we previously reported, economical procurement of tested systems has often been hindered because DOD did not give such procurement high enough priority. The JPATS cockpit is expected to meet DOD’s requirement that it accommodate at least 80 percent of the eligible female pilot population. Pilot size, as defined by the JPATS anthropometric characteristics, determines the percentage of pilots that can be accommodated in the JPATS cockpit. JPATS program officials estimate that the planned cockpit dimensions will accommodate approximately 97 percent of the eligible female population anthropometrically. The minimum design weight of the JPATS ejection seat (116 pounds) will accommodate 80 percent of the eligible female population.
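The fiscal year 1998–1999 deferral arithmetic above can be checked, and the fiscal year 1998 unit price (which is not stated) backed out from it. The FY 1999 unit prices and the $1.360 million saving below are the report's figures; the derived FY 1998 price is an inference for illustration, not a number from the contract.

```python
# Figures from the report, in millions of dollars:
fy99_price_at_12 = 2.905   # FY 1999 unit price if 12 aircraft are bought
fy99_price_at_14 = 2.785   # FY 1999 unit price if 14 aircraft are bought
stated_saving = 1.360      # two-year saving from deferring 2 aircraft

# Original plan: 18 in FY 1998 + 12 in FY 1999; revised plan: 16 + 14.
# The FY 1998 unit price p98 is unchanged under both plans (it would not
# rise until fewer than 16 aircraft were bought), so:
#   saving = 2 * p98 + 12 * fy99_price_at_12 - 14 * fy99_price_at_14
# Solving for the implied FY 1998 unit price:
p98 = (stated_saving + 14 * fy99_price_at_14 - 12 * fy99_price_at_12) / 2
print(round(p98, 3))  # → 2.745
```

The implied FY 1998 unit price of roughly $2.745 million sits plausibly below the 12-aircraft FY 1999 price, consistent with the larger FY 1998 lot earning a lower unit price.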
Because concerns have been raised about the ability of JPATS aircraft to accommodate female pilots, Congress directed DOD to conduct studies to determine the appropriate percentage of male and female pilots that could be accommodated in the cockpit. A DOD triservice working group studied the issue and concluded that a 32.8-inch minimum sitting height, instead of 34 inches, is one of several variables that would allow for accommodation of at least 80 percent of the eligible female population. The DOD working group determined that this change in sitting height would not require major development or significantly increase program risk. Thus, the Office of the Secretary of Defense established 32.8 inches as the new JPATS minimum sitting height requirement. In addition, the minimum weight requirement for the JPATS ejection seat was lowered from 135 pounds to 116 pounds to accommodate 80 percent of the eligible female population. Another study is being conducted to investigate whether, at minimal additional cost, an ejection seat with a lower minimum weight limit might accommodate more than 80 percent of the female pilot trainee population. Phase one of that study is scheduled to be completed in the fall of 1997. DOD is proceeding with plans to procure a fleet of JPATS aircraft that may exceed the quantity needed to meet training requirements. Until inconsistencies in the data used to calculate JPATS requirements are resolved, it is unclear how many aircraft should be procured. Furthermore, DOD’s schedule for procuring the aircraft does not take advantage of the most economical approach that would allow it to save money and permit more time for operational testing.
We therefore recommend that the Secretary of Defense (1) determine the appropriate attrition rates and mission capable rates for calculating JPATS requirements, taking into account the planned improvements in JPATS safety, reliability, and maintainability, and recalculate the requirements as appropriate; (2) direct the Air Force to revise the JPATS procurement plan to take better advantage of the favorable prices in the contract; and (3) upon successful completion of operational test and evaluation, acquire JPATS aircraft at the most economical, target-quantity unit prices provided by the contract. In commenting on a draft of this report, DOD did not agree with our conclusion that DOD overstated JPATS requirements or with our recommendation that the Secretary of Defense direct the Air Force and the Navy to recalculate aircraft requirements. DOD partially concurred with our recommendation to buy JPATS aircraft at the most economical target unit prices provided in the contract. DOD believed that the Air Force and the Navy used appropriate attrition rates and mission capable rates to calculate JPATS requirements and that these rates accounted for improvements in technology and mechanical reliability. It noted that we had incorrectly identified the T-34C aircraft attrition rate as 0.4 aircraft per 100,000 flying hours rather than 2.1 aircraft per 100,000 flying hours. The Navy provided data that it believed supported DOD’s position, but our analysis showed that this data supported an attrition rate that differed from both the Navy and DOD rate. Furthermore, DOD stated that the 94-percent mission capable rate cited in the JPATS contract is achievable only under optimal conditions and that the lower mission capable rates used by the Air Force and the Navy are based on the maximum possible aircraft use at the training sites. Although DOD stated that the Navy used a mission capable rate of 87 percent, our analysis showed that the Navy used a rate of 80 percent.
Because of the inconsistencies and conflicts in the attrition and mission capable rate data between DOD and the services, we revised our conclusion to state that, until these discrepancies are resolved, it is unclear how many aircraft should be procured and revised our recommendation to call for the Secretary of Defense to determine the appropriate rates and recalculate JPATS requirements as appropriate. DOD agreed that procuring aircraft at the most economical price is desirable and stated that it will endeavor to follow this approach in future JPATS procurement. It, however, noted that competing budget requirements significantly affect procurement rates of all DOD systems and that limited resources generally make procurement at the most economical rates unachievable. DOD’s written comments are reprinted in appendix I. To review service calculations of JPATS requirements, DOD’s procurement schedule for the aircraft, and efforts to design the JPATS cockpit to accommodate female pilots, we interviewed knowledgeable officials and reviewed relevant documentation at the Office of the Under Secretary of Defense (Acquisition and Technology) and the Office of the Secretary of the Air Force, Washington D.C.; the Training Systems Program Office, Wright-Patterson Air Force Base, Ohio; the Air Force Air Education and Training Command, Randolph Air Force Base, Texas; the Navy Chief of Naval Air Training Office, Corpus Christi, Texas; and the Raytheon Aircraft Company, Wichita, Kansas. We examined Air Force and Navy justifications for using specific attrition rates, mission capable rates, and flying hour numbers in determining aircraft quantities. We also analyzed the variation in quantity unit price charts in the procurement contract to determine the most economical way to procure JPATS aircraft. In addition, we reviewed congressional language on cockpit accommodation requirements and current program estimates of compliance with that requirement. 
This review was conducted from September 1996 to July 1997 in accordance with generally accepted government auditing standards. As the head of a federal agency, you are required under 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight no later than 60 days after the date of this report. A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency’s first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to the Secretaries of the Navy and the Air Force and to interested congressional committees. We will also make copies available to others upon request. Please contact me at (202) 521-4587 if you or your staff have any questions concerning this report. The major contributors to this report were Robert D. Murphy, Myra A. Watts, and Don M. Springman. The following are GAO’s comments on the Department of Defense’s (DOD) letter dated July 17, 1997. 1. The Navy, in deriving the projected attrition rate of 1.5 aircraft losses per 100,000 flying hours for Joint Primary Aircraft Training System (JPATS) aircraft, used a 0.4-lifetime attrition rate for the T-34C in determining total aircraft requirements. DOD, in its response to our draft of this report, stated that the actual lifetime attrition rate for the T-34C is 2.1; however, the data provided to support that rate indicated an attrition rate of 3.6 aircraft per 100,000 flying hours. Because the attrition rate figures provided to us for the Navy’s T-34 differ substantially, the Air Force’s estimated attrition for JPATS aircraft is twice the rate experienced on the T-37, and the Air Force’s Air Education and Training Command has revised its calculations of requirements, we believe reassessment of requirements for JPATS aircraft is needed. 2. 
The JPATS production contract specifies the aircraft shall meet or exceed a 94-percent mission capable rate for the total hours the aircraft is in the inventory and does not specify the severity of conditions. Although the Navy now maintains that its requirement was for a primary trainer aircraft with an 87-percent mission capable rate, the Navy used, and continues to use, an 80-percent mission capable rate in calculating JPATS aircraft quantity requirements. The latest JPATS Operational Requirements Document, issued December 1996, shows an 80-percent mission capable rate for the Navy, not 87 percent as indicated in DOD’s response to our draft report. 3. We recognize that limited resources and competing budget requirements affect production rates; however, the point we made was that DOD’s procurement plan (the future years defense plan) for acquisition of JPATS aircraft did not make the best use of the limited resources that had already been assigned to the JPATS program. Our report, on page 6, illustrates how, with fewer resources, the Air Force could have acquired the same number of aircraft over a 2-year period. The illustration is valid, in that it shows that the DOD procurement plan was not the most effective and that it should be reassessed. Indeed, the procurement quantities in the plan for fiscal years 1999 and 2000 continue to include insufficient quantities for DOD to take advantage of the most favorable prices in the contract, and without a reassessment and a change to the plan, Congress may need to ensure that resources are used most effectively. 4. DOD did not provide us information to show how historical data for single-engine trainer aircraft were used to predict the JPATS rate of 1.5 losses per 100,000 flight hours. We believe that a predicted attrition rate for JPATS aircraft that is twice that of 40-year old T-37 trainers does not account for improvements that are to be incorporated in JPATS aircraft. 5. 
We do not believe it is premature at this time to reassess JPATS requirements. We believe reassessment is needed now because the Navy has provided several different attrition rates, all of which are intended to represent T-34 historical experience; the proposed JPATS attrition rate is twice the historical rate of the Air Force T-37; and the Air Force and the Navy continue to project different mission capable rates for JPATS aircraft that are lower than the rate the aircraft is required to demonstrate under the contract. We agree that, as experience is gained with the JPATS aircraft, the quantities should also be reassessed periodically. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. | GAO reviewed: (1) the Air Force's and Navy's calculations of the quantity of Joint Primary Aircraft Training System (JPATS) aircraft needed to meet training requirements; (2) the impact of the Department of Defense's (DOD) procurement schedule on the aircraft's unit price; and (3) service efforts to design the JPATS cockpit to accommodate female pilots. 
GAO noted that: (1) the Air Force and the Navy used inconsistent data to calculate the number of JPATS aircraft required for primary pilot training; (2) the Air Force used an attrition rate that was twice as high as the historical attrition rate for its existing primary trainer and the Navy used an attrition rate that differs from the rate that DOD now cites as accurate; (3) until inconsistencies in the mission capable rates and attrition rates are resolved, it is unclear how many JPATS aircraft should be procured; (4) DOD's procurement plan for acquiring JPATS aircraft does not take full advantage of the most favorable prices available in the contract; (5) for example, the plan schedules 18 aircraft to be procured during fiscal year (FY) 1998 and 12 aircraft during FY 1999, a total of 30 aircraft; (6) however, GAO found that these 30 aircraft could be procured more economically if 16 rather than 18 aircraft are procured in FY 1998 and 14 rather than 12 aircraft are procured in FY 1999; (7) this approach would save $1.36 million over the 2 fiscal years and permit more operational testing and evaluation to be completed; (8) furthermore, the procurement plan does not schedule a sufficient number of JPATS aircraft for procurement in fiscal year 2000 to achieve lower prices that are available under the terms of the contract; (9) because concerns had been raised about the ability of JPATS aircraft to accommodate female pilots, Congress directed DOD to study and determine the appropriate percentage of the female pilot population that the aircraft should physically accommodate; (10) based on its studies, DOD established the requirement that the JPATS aircraft be able to accommodate 80 percent of the eligible female pilot population; (11) pilot size determines the percentage of pilots that can be accommodated in the JPATS cockpit; (12) planned cockpit dimensions are expected to accommodate about 97 percent of the eligible female pilot population; and (13) to permit safe 
ejection from the aircraft, the ejection seat minimum pilot weight is 116 pounds, which is expected to accommodate 80 percent of the eligible female pilot population. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Because the models that comprise the ALSP Confederation were built in the early 1980s to meet service-specific needs, they lack the ability to simulate many aspects of joint warfare, including operations other than war, strategic mobility, space, intelligence, and logistics capabilities. These models also lack the capability to represent many combat interactions, such as ground to ship. Because of the existing shortfalls of the services’ individual models, the ALSP Confederation can only fully support 2 of 25 identified CINC and service joint training requirements. This may also be because requirements for applying the ALSP technology were solicited from the CINCs and services only after the technology had been developed. The services have long recognized the technical and training shortfalls of their respective models for accurately portraying joint operations. The Army’s Corps Battle Simulation is a ground maneuver training simulation used in exercises for commanders and battle staffs. The Army model lacks the capability to simulate weather information, the terrain of the battlefield, and ground-to-ground combat interactions with the Marine Corps model. The Air Force’s Air Warfare Simulation used to support air operations has limited capability to simulate electronic warfare, reconnaissance and surveillance play, and space capabilities. The Navy’s sanctioned training model, the Enhanced Naval Wargaming System, operates on a hardware system that cannot interface with ALSP. The Navy has been modifying this model for acceptance into the confederation since 1993. Navy officials were unable to elaborate on the joint training benefits that would be achieved from these modifications. According to service modeling and simulation officials and after-action reports, the Research, Evaluation, and Systems Analysis Simulation—a naval analytical model—has been used successfully in the current ALSP Confederation.
In 1994, the Marine Corps introduced a new amphibious operations simulation, the Marine Air Ground Task Force Tactical Warfare Simulation, into the ALSP Confederation. The Office of the Secretary of Defense created the Defense Modeling and Simulation Office to serve as the focal point for modeling and simulation under the Director, Defense Research and Engineering. The DOD Executive Council for Modeling and Simulation, chaired by the Director, Defense Research and Engineering, advises and assists the Under Secretary of Defense for Acquisition and Technology in modeling and acquisition decisions. The JSIMS program is a jointly managed DOD program with the Air Force providing acquisition oversight. The JSIMS Joint Program Office, under the Air Force Program Executive Officer for Combat Support Systems, has been designated as an acquisition activity for JSIMS. The Army’s Simulation, Training and Instrumentation Command is the executive agent for the day-to-day management of the ALSP Confederation. The development of JSIMS is already a year behind schedule and a clear, consistent definition of JSIMS is still evolving. According to the June 1994 joint memorandum of agreement, a clear definition was due of what constitutes JSIMS within 4 months of the signing of the memorandum. Also due was a detailed plan of action in the form of a JSIMS Joint Program Office charter and JSIMS master plan delineating duties, responsibilities, mission, scope, and strategies for implementing JSIMS. However, lack of agreement among the services as to what JSIMS entails has delayed approval of the charter and the plan. The services have different interpretations of the memorandum of agreement. The low end of expectations is a set of standards and protocols that would allow interoperability for the services’ next generation of simulations. 
The high end of expectations is a “super model” in which JSIMS would describe all of the objects, such as aircraft, for all of the services and determine all warfare functions. During July 1995, the Acting Assistant Secretary of the Air Force for Acquisition approved milestone 0 for the JSIMS program, which authorizes proceeding into the concept exploration and definition phase of the acquisition cycle. At that time, the JSIMS Joint Program Office stated that JSIMS would comprise (1) a core element of common functions, such as terrain and weather effects and (2) warfare functions, such as air, ground, and naval combat, and logistics. Common core development would be the responsibility of the JSIMS Joint Program Office while warfare function development will be the responsibility of designated executive agents. The executive agents will develop a joint representation of their warfare area that would then be integrated with the JSIMS core. The Army is the executive agent for land warfare, the Air Force for air and space warfare, and the Navy for sea warfare. The Marine Corps’ missions will be included throughout these executive agents’ warfare representations. Further, the 1994 memorandum of agreement stated that JSIMS should also be adaptable to other modeling and simulation applications, such as analysis and testing. However, in February 1995, the Deputy Secretary of Defense directed the Director, Program Analysis and Evaluation, to initiate and lead development of a new joint analysis model called the Joint Warfare System (JWARS). Program Analysis and Evaluation officials informed us that they believed improvements to DOD’s analytical capability needed to be made now and they could not afford to wait for JSIMS to become a reality. The JSIMS’ focus is now solely on providing a simulation environment for joint task force training. Coordination between the JSIMS and JWARS programs is being worked out. 
Currently, the major stumbling block for JSIMS is how to fund the $416 million program since there is no central funding line for the program. Some military service officials have expressed concerns about the piecemeal approach of funding JSIMS. As of July 1995, the JSIMS’s core element was estimated to cost about $154 million. Under the provisions of the joint memorandum of agreement, the Army, the Air Force, and the Navy have each agreed to provide 30 percent of this cost. The Marine Corps will provide 10 percent of the cost. In addition to the $154 million, the executive agents will incur additional costs, currently estimated at a total of $262 million, to develop simulations for their specific warfare functions. The problem with this approach is that if a service believes that improving its own core competencies is a higher funding priority than its JSIMS responsibilities, that JSIMS function may not be developed in concert with the other required components. The military services are proceeding to develop the next generation of simulations that will better address their specific mission or core requirements. The services are also responsible for ensuring that these simulations are able to function within the JSIMS’ domain. The Army’s program, Warfighters’ Simulation 2000, is estimated to cost about $200 million and be operational by 2000. The Air Force is developing the National Air and Space Warfare Model that is estimated to cost about $103 million and be fully operational by 2003. The Navy is developing an analytical simulation, the Naval Simulation System, at an initial estimated cost between $15 million and $25 million that could be enhanced at an additional cost of about $47 million to function in a training capacity. Unless decisive management is exercised, these service efforts may outpace JSIMS’ core development and require additional modifications to operate in the JSIMS’ domain.
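The funding arithmetic above can be cross-checked in a few lines; the cost shares and estimates below are the report's figures (a $154 million core split 30/30/30/10 among the services, plus $262 million of executive-agent development), and the sketch simply confirms they sum to the $416 million program total.

```python
core_cost = 154.0          # JSIMS core estimate, millions (July 1995)
exec_agent_cost = 262.0    # executive agents' warfare-function development

# Core cost shares under the June 1994 memorandum of agreement:
shares = {"Army": 0.30, "Air Force": 0.30, "Navy": 0.30, "Marine Corps": 0.10}

contributions = {svc: frac * core_cost for svc, frac in shares.items()}
total_program = core_cost + exec_agent_cost

print({svc: round(amt, 1) for svc, amt in contributions.items()})
print(total_program)  # → 416.0
```

Each of the three larger services would contribute about $46.2 million toward the core and the Marine Corps about $15.4 million, which is why a shortfall by any one service leaves a visible hole in the jointly funded core.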
According to DOD officials, several recent events have occurred that demonstrate the JSIMS program is moving forward. First, on July 14, 1995, the Director, Defense Research and Engineering, chaired the first JSIMS Senior Review Board at which the members agreed to provide their share of the JSIMS Joint Program Office permanent staff. Second, the Under Secretary of Defense for Acquisition and Technology signed a memorandum on August 8, 1995, calling on DOD components to formally adopt a proposed division of funding and personnel requirements. Third, the Deputy Secretary of Defense endorsed the establishment of a joint core funding line with the services providing both their share of core funding and personnel to staff the JSIMS Joint Program Office. In addition, the Director, Defense Research and Engineering, and the Joint Staff are to provide a share of funding for the JSIMS core program. However, we note that these actions have not been formalized. Concurrent with the development of JSIMS, DOD has decided to make improvements to the ALSP Confederation, the last of which is expected to be in place in 1999—at the same time that JSIMS should reach initial operational capability. According to the ALSP Master Plan, the improvements are intended to respond to the identified CINC and service training requirements and include additional capabilities such as strategic mobility and ground-to-ground combat interactions between models. Even though officials from the Office of the Secretary of Defense’s Director, Defense Research and Engineering, the Defense Modeling and Simulation Office, and the Army Simulation, Training, and Instrumentation Command told us that the total cost of these improvements will not be significantly high, none of these offices was able to provide comprehensive cost estimates. We identified about $40 million that DOD plans to spend for ALSP Confederation improvements through fiscal year 1999. 
However, because this money may be directed toward service-specific improvements rather than joint improvements, the cost could be higher. As is the case with JSIMS, there is no central funding line for the ALSP Confederation improvements. Consequently, DOD’s ability to achieve all of the improvements that it seeks is dependent on funding from the individual military services, agencies, or CINCs. However, to date the Office of the Under Secretary for Acquisition and Technology has not provided the management attention needed to ensure that all significant components of the ALSP improvements will be completed. Consequently, management of the improvements has been fragmented and it is questionable whether the improvement plan is cost-effective. For example, the Army has decided not to fund ALSP improvements to its ground warfare model, which is a primary component of the ALSP Confederation. The Army is proceeding to develop its new training model, Warfighters’ Simulation 2000. The Army has already awarded contracts for the new model’s development. The impact of the Army’s decision not to fund ground warfare improvements on other confederation model improvement efforts or future training requirements is unknown. In contrast, the Air Force is spending about $7 million to consolidate two versions of its air warfare model and plans to enter the combined model into the ALSP Confederation in 1997. The consolidation effort will result in combining the best features of the two versions, as well as preventing future duplicative efforts. The Navy has been spending nearly $2 million annually to replace its current confederation model by fiscal year 1997. Navy officials, however, could not specify how the replacement model would improve the confederation’s joint training capability. The U.S. Transportation Command and the U.S. Space Command are each modifying models for inclusion into future confederations that would expand the ALSP Confederation’s capability.
To help ensure the total development of JSIMS, we recommend that the Secretary of Defense establish a joint funding line for the core development of JSIMS and direct the Secretaries of the Army, the Navy, and the Air Force to establish funding lines for their respective executive agent JSIMS responsibilities regarding warfare function development. Further, we recommend that the Secretary of Defense require the Under Secretary for Acquisition and Technology to assume a stronger management role to resolve simulation issues by defining JSIMS, developing a definitive plan of action, and developing a transition strategy to phase out ALSP and phase in JSIMS. This strategy should be based upon cost estimates associated with modifying, expanding, and testing the ALSP Confederation to decide which improvements to the ALSP Confederation provide benefits that are cost-effective. In written comments on a draft of our report, DOD generally agreed with our findings and recommendations (see app. I). The Department said that it recognizes the shortcomings of today’s joint training simulations and is committed to developing more cost-effective capabilities. In response to our recommendations, DOD said that it has taken action to establish a joint funding line for the JSIMS core and to ensure service support for their respective combat representations. DOD stated that a plan to phase out ALSP and phase in JSIMS will be developed based on both technical considerations provided by the Under Secretary of Defense for Acquisition and Technology and operational considerations provided by the services and CINCs. However, DOD did not agree with our assessment of the status of the JSIMS program. The Department does not believe that the JSIMS program has been stalled.
DOD said that (1) it deliberately established ambitious milestones in the JSIMS memorandum of agreement to serve as an action to move the project along; (2) the JSIMS project has moved from a general consensus agreement, through stand-up of a transitional JSIMS Joint Program Office, to the formation of a permanent Joint Program Office; (3) a systems definition for JSIMS was developed in an April 1995 meeting; (4) the JSIMS Operational Requirements Document is in final review; and (5) the program officially entered the Concept Exploration and Definition phase when it attained milestone 0 status during July 1995. Our assessment of the status of JSIMS is based upon documentation provided to us during our review. The various management groups responsible for development of JSIMS have conducted numerous meetings in an effort to bring about a consensus of what JSIMS constitutes. However, we believe that JSIMS has been stalled at a fundamental level as evidenced by the minimal progress since the signing of the June 1994 memorandum of agreement. At the conclusion of our review, there were indications that the program might be progressing. However, no actions had been finalized. A permanent charter for the Joint Program Office as called for by October 1994 is still not established. The JSIMS Operational Requirements Document is still not approved. According to documents presented at the July 1995 JSIMS Senior Review Board meeting, the estimated cost to develop JSIMS core and warfare functions is $416 million. The JSIMS core without the warfare functions will not achieve DOD’s joint training objectives. Therefore, we believe it is important to identify all development costs. DOD said that it could not substantiate the $40 million we identified that the services are planning to spend on ALSP improvements. DOD stated that $6.1 million is currently budgeted for ALSP core support through fiscal year 1999. 
DOD acknowledged that all other funding for modifications to the ALSP models is provided by the services or CINCs but could not substantiate this figure. The $40 million figure was derived from documents and discussions held with service budget officials and is subject to change depending upon the services’ priorities for spending. To determine whether DOD is progressing with its development of JSIMS, we interviewed knowledgeable officials from the Defense Modeling and Simulation Office, Washington, D.C.; the Joint Staff, Washington, D.C.; the Joint Warfighting Center, Fort Monroe, Virginia; the JSIMS Joint Program Office, Orlando, Florida; and the services’ modeling and simulation management offices in Washington, D.C. In addition, we interviewed the Director, Defense Research and Engineering, Office of the Secretary of Defense. We reviewed the draft DOD Modeling and Simulation Master Plan; the Executive Council for Modeling and Simulation meeting minutes; and DOD, Joint Staff, and service modeling and simulation policies. In addition, we reviewed related Defense Science Board and DOD Inspector General reports. To determine whether DOD’s decisions to improve the ALSP Confederation are cost-effective, we interviewed modeling and simulation officials at the Simulation, Training, and Instrumentation Command, Orlando, Florida; the Warrior Preparation Center, Einsiedlerhof Air Station, Germany; the Joint Training Analysis and Simulation Center, Suffolk, Virginia; and the National Simulation Center, Fort Leavenworth, Kansas. We reviewed numerous documents on the ALSP Confederation. We discussed the costs of simulation improvements with each service model’s proponents. We conducted our work between January 1995 and August 1995 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Chairmen, Senate and House Committees on Appropriations, Senate Committee on Armed Services, and House Committee on National Security; the Secretary of Defense; the Deputy Secretary of Defense; the Under Secretary of Defense for Acquisition and Technology; the Director, Defense Research and Engineering; and the Secretaries of the Army, the Navy, and the Air Force. We will make copies available to others on request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report were Charles J. Bonanno, Brenda S. Farrell, Raymond G. Bickert, and Colin L. Chambers. | GAO reviewed the Department of Defense's (DOD) development of the Joint Simulation System (JSIMS), focusing on whether DOD: (1) is progressing with its development of JSIMS; and (2) decisions to improve the Aggregate Level Simulation Protocol (ALSP) Confederation are cost-effective.
GAO found that: (1) JSIMS has not progressed beyond the conceptual stage due to internal disagreements within DOD; (2) further JSIMS development is contingent on the availability of about $416 million in funding; (3) DOD plans to spend at least $40 million through 1999 to improve ALSP before replacing it with JSIMS, but it is unclear whether that approach is cost-effective; (4) funding availability depends on how the services prioritize their contributions to JSIMS and ALSP; (5) the cost of ALSP improvements could increase because the planned improvements are service-specific and there is also no ALSP Central Funding; and (6) management of ALSP improvements is fragmented because DOD is not ensuring that the services will complete them and that the improvements are cost-effective. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Rural communities often have small or aging drinking water and wastewater systems. The need for a water project can arise for multiple reasons, including the need to replace or upgrade outdated or aging equipment that does not treat water to water quality standards and to improve systems that cannot produce water meeting new treatment standards. For example, arsenic is often present naturally in groundwater, and to meet new federal arsenic standards for drinking water, many rural communities using groundwater as a drinking water source will have to improve their drinking water systems to remove arsenic. EPA estimates that drinking water and wastewater infrastructure for small communities over the next several decades could cost more than $100 billion. This section describes (1) federal funding for drinking water and wastewater infrastructure projects in rural communities; (2) the process for applying for these federal funds, including the requirements state and federal agencies must ensure rural communities meet under the National Environmental Policy Act; and (3) our prior work on coordination among federal agencies and rural water infrastructure programs. The federal government administers a number of programs that assist rural communities in developing water and wastewater systems and complying with federal regulations, with EPA’s drinking water and clean water SRF programs and USDA’s RUS program providing the most funding. Communities typically pay for drinking water and wastewater infrastructure through the rates charged to users of the drinking water and wastewater systems. Large communities serve many people and can spread the cost of infrastructure projects over these numerous users, which makes projects more affordable.
Small or rural communities have fewer users across which to spread rate increases, making infrastructure projects less affordable and these communities more reliant on federal funding to help lower the cost of projects through lower interest rates or grants that do not need to be repaid. The Safe Drinking Water Act and the Clean Water Act authorize the Drinking Water SRF and Clean Water SRF programs, respectively, as well as EPA’s authority to regulate the quality of drinking water provided by community water supply systems and the discharge of pollutants into the nation’s waters. Under the Safe Drinking Water Act, EPA sets standards to protect the nation’s drinking water from contaminants, such as lead and arsenic. In 1996, amendments to the act established the drinking water SRF program to provide assistance for publicly and privately owned drinking water systems. Under the Drinking Water SRF program, states make loans and are required to provide a certain percentage of funding in loan assistance to communities of less than 10,000. The Clean Water Act is intended to maintain and restore the physical, chemical, and biological integrity of our surface waters, such as rivers, lakes, and coastal waters. In 1987, amendments to the Clean Water Act established the Clean Water SRF program to provide assistance to publicly owned wastewater treatment facilities. Using the federal funds EPA provides to capitalize the state SRF programs, states provide loans to communities for drinking water and wastewater treatment projects. In order to qualify, states must contribute an amount equal to 20 percent of the federal capitalization grant. States that qualify for funding are responsible for administering their individual SRF programs, and communities of any size can apply for assistance. Loans are generally provided at below-market interest rates, saving communities money on interest over the long term. 
As communities repay the loans, the states’ funds are replenished, enabling them to make loans to other eligible drinking water and wastewater projects, and creating a continuing source of assistance for communities. See figure 1 for a description of the state Drinking Water and Clean Water SRF program funding sources. Nationwide, there are almost 52,000 publicly and privately owned drinking water systems and 16,000 publicly owned wastewater treatment facilities. USDA’s RUS administers a water and wastewater loan and grant program for rural communities with populations of 10,000 or less. The program is designed to address public health concerns in the nation’s rural areas by providing funding for new and improved drinking water and wastewater infrastructure. RUS provides a mix of loan and grant funding to communities that have been denied credit through normal commercial channels. Like the SRF programs, the RUS program makes loans at below-market rates to save communities interest over time but, unlike the SRF programs, the RUS program can make loans for up to 40 years, which helps lower communities’ annual repayment costs. In addition, communities do not need to repay funds received as grants, further helping to reduce the overall financial burden they incur upon a water project’s completion. To determine the amount of loans and grants a community receives, RUS assesses the potential increase in the water or sewer user rate needed to repay the loan. RUS provides grants to communities when necessary to reduce user rates to a level that the agency determines to be reasonable. Other federal agencies have programs that provide funds for drinking water and wastewater infrastructure, including HUD’s Community Development Block Grant program and the Department of Commerce’s Economic Development Administration’s Public Works and Economic Development Program. 
Under HUD’s program, communities use block grants for a broad range of activities to provide suitable housing in a safe living environment, including water and wastewater infrastructure. Thirty percent of block grant funds are allocated by formula to states for distribution to communities of 50,000 or less. Drinking water and wastewater needs compete with other public activities for funding and, according to HUD officials, account for about 10 percent of all block grant funds nationally. The Economic Development Administration’s Public Works and Economic Development Program provides grants to small and disadvantaged communities to construct public facilities, including drinking water and wastewater infrastructure, to alleviate unemployment and underemployment in economically distressed areas. In addition, the U.S. Army Corps of Engineers and the Department of the Interior’s Bureau of Reclamation provide financial assistance for some large drinking water and wastewater projects, but these projects must be authorized by Congress prior to construction. In addition to these federal programs, some states have created their own programs to provide assistance for drinking water and wastewater infrastructure. For example, the North Carolina Rural Economic Development Center provides infrastructure loans for communities in the state’s rural counties. In Montana, the Treasure State Endowment Program provides grants to make drinking water and wastewater projects more affordable for the state’s communities.
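The loan terms described above can be made concrete with a rough repayment calculation. The loan amount, interest rates, and terms below are assumptions chosen for illustration only, not actual SRF or RUS program terms; the sketch simply shows why a below-market rate, and the RUS program's longer 40-year term in particular, lowers a community's annual repayment cost.

```python
# Illustrative only: assumed rates, loan amount, and terms, not program data.
def annual_payment(principal, rate, years):
    """Level annual payment on a fully amortizing loan."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 2_000_000  # hypothetical water project loan

market_20yr = annual_payment(principal, 0.06, 20)  # assumed commercial terms
srf_20yr = annual_payment(principal, 0.03, 20)     # assumed below-market SRF-style rate
rus_40yr = annual_payment(principal, 0.03, 40)     # same rate, RUS-style 40-year term

print(f"20 years at 6%: ${market_20yr:,.0f} per year")
print(f"20 years at 3%: ${srf_20yr:,.0f} per year")
print(f"40 years at 3%: ${rus_40yr:,.0f} per year")
```

Under these assumed terms, the below-market rate alone cuts the annual payment by roughly a quarter, and stretching the same loan to 40 years cuts it by more than a third again, which is how the RUS program helps keep user rates affordable for small communities.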
The state SRF programs and the RUS program each have their own application process through which communities can apply for funding, although the application processes generally include similar steps: (1) completing an application that asks for, among other things, basic demographic, legal, and financial information associated with the project; (2) developing a preliminary engineering report that provides basic design specifications and other technical information for the project; and (3) conducting an environmental analysis that considers the environmental effects of the proposed project and alternatives. The state agencies responsible for EPA’s SRF programs and USDA state offices review these documents, prioritize the projects based on agency-determined criteria, provide comments to communities on how their applications can be improved, and ultimately approve or reject the request for funding. Communities can choose to apply for funding to different federal and state programs at any stage during the process. In some cases, the SRF and RUS programs will work together to jointly fund the same project if the project is too large for one agency to fund, or if it will make the project more affordable for the community. If their requests are approved, communities design the projects, obtain construction bids, contract to build the projects, and are reimbursed by the funding agency. Communities usually hire a consulting engineer to develop the preliminary engineering reports and conduct the environmental analyses for a project. In addition, EPA and USDA pay for technical service providers that communities can use to help them understand and apply for their programs. Communities can also get assistance from local planning districts, which are voluntary associations of county and municipal governments that provide development assistance to their membership. 
A preliminary engineering report describes the proposed project, including its purpose, features of the proposed location, condition of any existing facilities, alternative approaches considered, design features, and costs. Figure 2 shows the application process and timeline that is generally followed for both EPA and RUS programs. The state SRF and RUS state-level programs review the likely environmental effects of projects they are considering funding using different levels of environmental analysis. These reviews occur either under the National Environmental Policy Act of 1969 (NEPA) for the RUS program, or for the SRF programs, under a state environmental review process similar to NEPA. EPA regulations define the necessary elements of these state “NEPA-like” reviews. Typically, a proposed water or wastewater project is subject to an environmental assessment or, in the rare case that the project is likely to significantly affect the environment, a more detailed environmental impact statement. If, however, the agency determines that activities of a proposed project fall within a category of activities the agency has determined has no significant environmental impact—a determination called a categorical exclusion—then the project applicant or the agency, as appropriate, generally does not have to prepare an environmental assessment or environmental impact statement. Because many community water and wastewater infrastructure projects either upgrade or replace existing infrastructure, projects rarely result in significant environmental impacts, and NEPA requirements can be satisfied through an environmental assessment or a categorical exclusion. In addition, in some cases, the funding agency may help complete the environmental analysis documents for a planned project. 
Our previous work has raised questions regarding sufficient coordination between drinking water and wastewater infrastructure funding programs, despite federal efforts to improve coordination at the state and local level. In December 2009, we reported that EPA, USDA, and other agencies that fund drinking water and wastewater infrastructure for rural communities along the U.S.-Mexico border lacked coordinated policies and processes and did not efficiently coordinate their programs, priorities, or funding. Specifically, without efficient coordination, applicants faced significant administrative burdens that, in some cases, resulted in project delays because the programs required separate documentation to meet the same requirements and did not consistently coordinate in selecting projects. For example, an engineer in Texas told us that one community applying for funding had to pay $30,000 more in fees because the engineer had to complete two separate sets of engineering documentation for EPA and USDA. As we stated in our December 2009 report, the applicant could have saved these funds had EPA and USDA established uniform engineering requirements. To resolve such inefficiencies, we suggested Congress consider establishing an interagency mechanism, such as a task force, of federal agencies working in the border region. One of the responsibilities of this task force would be to work with state and local officials to develop standardized applications and environmental review and engineering documents, to the extent possible, for the federal and state agencies working in the border region. Similarly, our October 2005 report discusses collaboration and practices that federal and state agencies can engage in to enhance and sustain interagency collaboration. In the report, we define collaboration as any joint activity that is intended to produce more public value than could be produced when organizations act alone.
According to the report, agencies can enhance and sustain interagency collaboration by engaging in one or more of the following practices: define and articulate a common outcome; establish mutually reinforcing or joint strategies; identify and address needs by leveraging resources; agree on roles and responsibilities; establish compatible policies, procedures, and other means to operate; develop mechanisms to monitor, evaluate, and report on results; reinforce agency accountability through agency plans and reporting; and reinforce individual accountability for collaborative efforts through performance management systems. For a number of these practices, the report states that nonfederal partners, key clients, and stakeholders need to be involved in decision making. Additionally, a number of important factors, such as leadership, trust, and organizational culture, are necessary elements for a collaborative relationship. Consistent with the findings of our October 2005 report, the 1997 joint memorandum signed by EPA, USDA, and HUD encourages cooperation in developing strategic plans for each agency’s program and encourages cooperation among program managers at the state level to remove as many barriers as possible in program regulations or policy. In addition, the memorandum encourages the development of common practices across agencies, including regularly communicating and leveraging funds to make the most efficient use of available resources. Moreover, the memorandum encourages the signing agencies to prepare common documents, including one environmental analysis per project, that meet all the federal and state agencies’ requirements. This memorandum is similar to governmentwide NEPA regulations and various guidance issued by the Council on Environmental Quality, which emphasize the need for coordination among federal and state agencies on environmental and other requirements.
Most recently, the council issued a March 2012 guidance that encourages federal agencies to cooperate with state, tribal, and local governments so that one document satisfies as many applicable environmental requirements as practicable. In addition, the guidance encourages federal agencies to enhance coordination under NEPA by designating a lead agency responsible for conducting an environmental analysis. Furthermore, according to the guidance, a federal agency preparing an environmental analysis should consider adopting another federal agency’s environmental analysis if it addresses the proposed action and meets the standards for an adequate analysis under NEPA and the adopting agency’s NEPA guidance. Drinking water and wastewater infrastructure funding is fragmented among the three programs we reviewed—EPA’s Drinking Water and Clean Water SRF programs and USDA’s RUS program. As a result, overlap can occur when communities with populations of 10,000 or less apply to one of the SRF programs and the RUS program. For the 54 projects we reviewed in the five states we visited, this overlap did not result in duplicate funding or funding for the same activities on the same project. Specifically, for 42 projects that we reviewed, the state SRF programs or the RUS program funded the projects individually, and for the remaining 12 projects that we reviewed, the state SRF and RUS programs each contributed a portion of the overall project cost because none of the programs could cover the full cost individually, according to community officials. However, we identified potentially duplicative efforts by communities to complete funding applications and related documents for both agencies. Overlap can occur among the state SRF and RUS programs because they can each direct funding to communities with populations of 10,000 or less. As a result, these communities are eligible to apply for funding from more than one of these programs. 
For example, communities of 10,000 or less can apply to the state Clean Water SRF and RUS programs for funds to install or upgrade wastewater treatment plants and sewer lines. In addition, communities of 10,000 or less can apply to the state Drinking Water SRF and RUS programs for funds to install, repair, improve, or expand treatment facilities, storage facilities, and pipelines to distribute drinking water. The state SRF and RUS programs have funded projects in communities with populations of less than 10,000 in recent years, according to our analysis of SRF and RUS data from July 1, 2007, through June 30, 2011. Specifically, over this time frame, communities with populations of 10,000 or less received $3.2 billion, or 36 percent of total Drinking Water SRF funding. Similarly, such communities received $6.3 billion, or 24 percent of total Clean Water SRF funding. In accordance with its mission, the RUS program has directed all of its funding for drinking water and wastewater infrastructure projects to such communities, for a total of $11 billion from October 1, 2006, through September 30, 2011. The amount of program funding overlap between the state SRF and RUS programs varies among the states, with some states showing greater overlap than others. State Drinking Water SRF program funding overlap with the RUS program ranged from 7 percent in Rhode Island to 93 percent in Virginia, and state Clean Water SRF program funding overlap with the RUS program ranged from 8 percent in California to 74 percent in Pennsylvania. Additional information about variations in program funding overlap is provided in appendix II. Overlap in program funding could lead agencies to fund the same project, resulting in the potential for duplication. 
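As a rough cross-check, the dollar amounts and percentages reported above for communities of 10,000 or less imply the overall size of each SRF program over the July 2007 through June 2011 period. The arithmetic below is illustrative and derived only from the figures in this section:

```python
# Funding received by communities of 10,000 or less, per the figures above
dw_small = 3.2e9  # Drinking Water SRF share, stated as 36 percent of the total
cw_small = 6.3e9  # Clean Water SRF share, stated as 24 percent of the total

# Implied program totals over the period
dw_total = dw_small / 0.36
cw_total = cw_small / 0.24

print(f"Implied Drinking Water SRF total: ${dw_total / 1e9:.1f} billion")
print(f"Implied Clean Water SRF total: ${cw_total / 1e9:.1f} billion")
```

The implied totals, roughly $8.9 billion for the Drinking Water SRF and $26 billion for the Clean Water SRF, show that small communities' share of SRF funding over this period was comparable in scale to the $11 billion the RUS program directed entirely to such communities.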
However, for the state SRF and RUS programs, the majority of projects we reviewed in the five states were funded by either one of the SRF programs or the RUS program, in conjunction with other federal or state program funds, such as HUD’s Community Development Block Grant program, Montana’s Treasure State Endowment Program, and programs from the North Carolina Rural Economic Development Center. Table 1 shows the funding awards for community projects in states we visited. In the five states we visited— Colorado, Montana, North Carolina, Pennsylvania, and South Dakota—42 of the 54 projects we reviewed received funding from the SRF or RUS programs, in addition to other sources. In addition to the 42 projects that were separately funded by the state SRF or RUS programs, 12 projects we reviewed received funding from both the SRF and RUS programs (see table 2 for funding details). Our analysis of these projects showed the programs did not pay for the same activities with their funding, and according to state and community officials, the joint funding for a community’s project was beneficial and warranted. Specifically, according to federal, state, and community officials we interviewed, jointly funded projects tended to be relatively expensive projects that exceeded one or the other agency’s ability to fund independently or that needed additional funding to make the project affordable for community residents. Following are examples: Washington, Pennsylvania, population approximately 3,500, sought funding from both the Clean Water SRF and RUS programs, and other programs, for its nearly $21 million sewer project to install over 200,000 feet of sewer lines. The community initially sought funding from the Clean Water SRF program, but then decided to seek additional funding from the RUS program after realizing the project exceeded available funding from the SRF program, according to the consulting engineer the community used. 
The Clean Water SRF program provided $10.3 million, and the RUS program provided $5.5 million. Hertford, North Carolina, population approximately 2,200, sought funding from the Drinking Water SRF and RUS programs for its project to expand drinking water capacity by drilling wells, installing water supply lines, expanding the water treatment plant, and constructing an elevated storage tank. Similar to the Washington, Pennsylvania, project, community officials said that the Hertford project was too expensive for a single agency to fund. The Drinking Water SRF program provided $2.6 million toward the project, and the RUS program provided $772,000. Faulkton, South Dakota, population approximately 800, sought funding from the Drinking Water SRF, the RUS program, and the Community Development Block Grant program to replace water pipelines and install a water tower. The town applied to multiple programs to receive grants to help ensure that the project would be affordable to its residents. The Drinking Water SRF program provided a loan in the amount of $500,000 and immediately forgave the balance of the loan, effectively providing these funds at no cost to the community. The RUS program provided $2.1 million in funds to this project, including grant funds, which helped keep the project affordable. The Community Development Block Grant program provided approximately $519,000 in additional funds, and the community put forth $149,000. Program overlap among the state SRF and RUS programs can result in potential duplication of communities’ efforts to prepare funding applications and related documents, including preliminary engineering reports and environmental analyses, according to our analysis of project documents and interviews with engineers and community officials in the five states we visited.
In these states, as with others, the state SRF and RUS programs require the communities to submit a preliminary engineering report and an environmental analysis as part of their loan applications. Preliminary engineering reports submitted by communities to the SRF and RUS programs contained many of the same components, but the format and the level of detail required varied. Table 3 shows the similar or common components included in these preliminary engineering reports of four projects we reviewed. We judgmentally selected an example from one community in each state that had at least one jointly funded project or that had applied to both programs for funding, and that prepared preliminary engineering reports. As table 3 shows, the preliminary engineering reports for both programs asked for similar information such as project location, community growth and population, existing facilities, alternative approaches to the project, and environmental and technical details of the project. The preliminary engineering reports prepared for the RUS program also included information on debt service and short-lived assets—those assets that have a planned life less than the repayment period of the loan—while the SRF engineering reports did not include such information. Engineers and community officials we interviewed in some states told us that they prepare separate preliminary engineering reports for each agency when a community applies for funding from both agencies, which can increase costs to the communities. Specifically, officials and engineers in some states told us the requirements for USDA’s RUS preliminary engineering report are generally more rigorous. They stated that these reports contain similar information but with different formats and levels of detail. 
Examples are as follows: In North Carolina, engineers and a technical service provider we interviewed told us that the state SRF and RUS formats for the preliminary engineering reports differed significantly in format but contained much of the same information. State officials told us the state SRF programs do not typically accept preliminary engineering reports completed for the state-level RUS program because they try to maintain a common format to enable efficient review. Similarly, the state-level RUS program officials said that they do not accept reports completed for the state SRF programs. In Colorado, an engineer for several projects we reviewed told us that the engineering firm had to complete preliminary engineering reports for both the state SRF programs and the RUS program even though the reports had similar formats and information. In South Dakota, engineers told us that to minimize effort, time, and cost to the community, they prepare preliminary engineering reports to meet state SRF, RUS, and other program requirements even if the community does not initially seek funds from all of these programs. These engineers said doing so helps minimize the additional effort it would take to revise the report at a later time if the community decided to seek additional funds. According to another engineer, if the preliminary engineering report is completed to meet just the SRF programs’ requirements, the firm will require additional time and money to meet the additional preliminary engineering report requirements necessary to apply for funding through the RUS program. Montana and Pennsylvania take a different approach than the other three states we visited as follows: Montana has a uniform preliminary engineering report accepted by most federal and state agencies. Engineers said that the agencies ask for some different information, which they gather in amendments to the report instead of having communities submit similar information multiple times. 
In Pennsylvania, officials from state SRF and state-level RUS programs said they encourage communities to apply to either the SRF or RUS programs and do not often jointly fund projects. Officials from both programs told us that when they do fund projects jointly, they try to accept one another’s documents to avoid duplicating them. We also found similarities in the environmental analyses submitted by communities to the SRF and RUS programs for four of the projects in the states we visited. According to our review of environmental analyses submitted to the state SRF and RUS programs—we judgmentally selected one in each of four communities and states that had jointly funded projects or applied to both programs for funding—each environmental analysis followed a similar overall format and contained many of the same components, but the level of analysis and the level of detail needed to satisfy federal and state requirements varied. Table 4 shows the overall format and similar components for these environmental analyses. The agencies ask for information on many of the same components, including purpose and need, alternatives analysis, and environmental consequences. The extent to which communities duplicate their environmental analyses for each program varies by state, depending on the extent to which water and wastewater infrastructure programs in the state accept each other’s work or use each other’s documents. In Colorado, North Carolina, and South Dakota, the communities can submit the final approved environmental analyses prepared for the RUS program to the SRF programs, which eliminates one of the documents they have to prepare. However, in these states, the state-level RUS program will not typically accept the analysis prepared for the SRF program because the state analyses are less rigorous, according to RUS officials. 
In Pennsylvania, the state programs have agreed to uniform environmental requirements, and the communities therefore submit the same document to both programs. Communities may be required to submit additional information, as needed, to meet requirements specific to each program. In Montana, the state SRF programs prepare an environmental analysis for the community that is primarily based on information that the community submits in the preliminary engineering report, but the community prepares the environmental analysis that it submits to the state RUS program. Furthermore, in some cases, the state programs may require the same type of environmental analysis for a project but, in other cases, the state programs may require different levels of environmental analysis—such as a categorical exclusion. For example, for a single wastewater project, the town of Conrad, Montana, completed an environmental analysis for the state-level RUS program, while the state SRF program completed the environmental analysis for the town. In contrast, Pagosa Springs, Colorado, submitted an environmental checklist to the state SRF program for its wastewater project and received a categorical exclusion but had to submit an environmental analysis for the application it submitted to the state-level RUS program for the same project. Variation exists across states despite NEPA regulations stating that federal agencies should eliminate duplication with state and local procedures by providing for joint preparation of environmental analyses or by adopting appropriate environmental analyses. According to state SRF officials, state-level RUS officials do not always accept state analyses because NEPA regulations under the RUS program are rigid and because some state RUS officials are not flexible in their interpretation of the requirements for environmental analyses. 
State RUS officials, however, told us that environmental analyses by some state environmental programs are not sufficient to meet federal NEPA standards, making it difficult for them to accept these environmental analyses. Potentially duplicative application requirements, including preliminary engineering reports and environmental analyses, may make it more costly and time-consuming for communities to complete the application process. For example, if consulting engineers have to provide similar, or even the same, information in two different engineering reports or environmental analyses, their fees to the community may be higher. Engineers we interviewed estimated that preparing additional preliminary engineering work could cost anywhere from $5,000 to $50,000 and that the cost of an environmental analysis could add as little as $500 to a community’s costs or as much as $15,000. Moreover, having to complete separate preliminary engineering reports or environmental analyses may delay a project because of the additional time required to complete and submit these documents. State officials in Montana told us that coordination between federal and state programs and the implementation of uniform application requirements could reduce the time it takes an applicant to complete a rural water infrastructure project by up to half.

Our review of five states and local communities in those states showed that EPA and USDA have taken some actions to coordinate their programs and funding at the federal and state level to help meet the water infrastructure needs of rural communities, but not others specified in the 1997 memorandum. Because these federal programs are implemented at the state level, efforts to coordinate between the agencies primarily occur among state officials managing the SRF and other water infrastructure programs, the RUS state-level offices, and the communities whose projects they fund.
In some cases, inconsistent coordination at the state level has led to potential duplication for communities applying for funding and inefficiencies in program funding. EPA and USDA, at the federal level, and the state SRF and RUS state-level offices, have taken some actions to coordinate but have not taken others that could help avoid duplication of effort by communities applying for project funding. Recognizing the importance of coordinating the SRF and RUS programs at the state level, EPA and USDA have taken some actions at the federal level to encourage coordination between the state-level programs and communities but not other actions specified in the 1997 memorandum. The 1997 joint memorandum signed by EPA and USDA sought to improve coordination among federal and state agencies as they help fund community projects. It identified four major actions that state and state-level federal offices can take to improve coordination and reduce inefficiencies and potential duplication of effort. These actions are consistent with several of the leading practices we identified in our October 2005 report on interagency collaboration. These actions are as follows:

Cooperate in preparing planning documents. The memorandum encourages state SRF and RUS programs to cooperate in preparing planning documents, including operating, intended use, and strategic plans that are required under each agency’s programs. The memorandum says that the federal and state programs should endeavor to incorporate portions of each agency’s planning documents to minimize duplication of planning efforts. This action is consistent with two leading practices for interagency collaboration identified in our previous work—defining and articulating common outcomes and developing joint strategies—through which partner agencies can overcome significant differences in agency missions and cultures, and align their activities and resources to accomplish common goals.
Cooperate to remove policy and regulatory barriers. The memorandum states that agencies should cooperate in removing as many barriers to coordination as possible in program regulations or policy by, for example, coordinating project selection systems and funding cycles. This action is consistent with a leading practice for interagency collaboration identified in our previous work—promoting compatible policies and procedures.

Cooperate on project funding. The joint memorandum encourages state SRF and state-level RUS officials to meet on a regular basis to cooperate in determining what projects will receive funding and which program should fund which project, and to discuss the possibility of jointly funding projects when necessary. This action is consistent with two of the leading practices for interagency collaboration identified in our previous work—agreeing upon roles and responsibilities and leveraging resources. Through such actions, federal and state agencies funding water and wastewater infrastructure can clarify which agencies will be responsible for taking various steps and for organizing joint and individual agency efforts and thereby obtain benefits that they would not have realized by working individually.

Cooperate in preparing environmental analyses and meeting other common federal requirements. The joint memorandum states that, whenever possible, agencies should cooperate on federal requirements that are common across agencies—environmental analyses and other common documents, such as preliminary engineering reports—in order to create one comprehensive application package per project. This action is consistent with our leading practice for interagency collaboration of establishing compatible policies and procedures for operating across agency boundaries. Through such an action, federal and state agencies would seek to make policies and procedures more compatible.
In February 2012, EPA, USDA, and several other federal and state agencies created a working group to examine the feasibility of developing uniform guidelines for preliminary engineering report requirements. The group plans to develop a draft outline for uniform preliminary engineering report guidelines by September 2012 and has received numerous examples and comments from participating states. According to RUS officials, however, once the draft outline is developed it must be reviewed by participating state and federal agencies before it is considered final, and the final outline could be delayed if agency review and response times are slow. In addition, EPA and USDA have taken action at the federal level to help the states coordinate better and make programs more efficient for communities applying for funding. Specifically, EPA and USDA coordinate at the federal level to encourage states to emphasize coordination between their SRF programs and RUS, as well as with local communities. According to EPA and USDA officials, to inform state officials and communities about the programs and funding opportunities available in their respective states, the federal agencies participate in conferences and workshops, conduct Webinars, and sponsor training. The federal agencies also issue guidance to their programs. For example, EPA issued a report in 2003 providing case studies and innovative approaches on how state SRF programs could better coordinate with other programs with similar purposes. In addition, in June 2011, EPA and USDA signed a Memorandum of Agreement to work together to help communities implement innovative strategies and tools to achieve short- and long-term water and wastewater infrastructure sustainability. 
Among other things, the memorandum encourages the agencies to share and distribute resources and tools to communities that promote long-term sustainability and to provide training and information that encourages the adoption and adaptation of effective water infrastructure management strategies. The actions that EPA and USDA have taken to date, such as providing guidance in the 1997 memorandum, have helped states and state-level federal agencies to coordinate generally but have not facilitated better coordination at the state level in more specific ways. In particular, the federal agencies have not taken actions, highlighted in the 1997 memorandum, to develop common documents for communities to apply to different funding programs. For example, EPA and USDA have not created a working group or taken similar action to work with other federal and state officials to develop a uniform environmental analysis. Making environmental analyses more compatible would be consistent with the March 2012 Council on Environmental Quality guidance on eliminating duplication in federal NEPA efforts. Similar to the 1997 joint memorandum, Council on Environmental Quality NEPA regulations and guidance encourage coordination between state and federal agencies in preparing environmental documents to reduce the time and cost required to make federal permitting and review decisions while improving outcomes for communities and the environment. According to agency officials, the agencies have not taken such action because they believe they have coordinated sufficiently. According to EPA officials, the states conduct NEPA-like analyses but are not required to meet the same NEPA requirements as federal agencies, and EPA cannot therefore dictate what documents the states use.
In addition, USDA officials said that the RUS program’s NEPA guidance documents already encourage state-level RUS offices to coordinate with the state SRF programs to accept RUS’s environmental analyses, as appropriate and consistent with guidance from the Council on Environmental Quality. Without agreement to use common environmental analyses, however, rural communities could continue to spend more effort and resources to meet application requirements for improving their water and wastewater infrastructure. In the five states we visited, the state-level programs varied in the actions they took to coordinate their water and wastewater infrastructure programs consistent with the 1997 joint memorandum. In some states, the state SRF and RUS programs have developed innovative ways to coordinate and remove barriers to coordination consistent with the 1997 memorandum but, in other states, the state SRF and RUS programs have been less successful, leading to potential duplication for communities applying for funding and inefficiencies in program funding. Table 5 shows the extent of coordination actions taken by the state SRF programs and state-level RUS programs in the five states we visited. Some community officials we met with suggested that, for the drinking water and wastewater infrastructure programs, good coordination among state officials would involve meeting on a regular basis to cooperate in determining what projects would receive funding, thereby leveraging agency funds that are increasingly limited. In the five states we visited, the state SRF and state-level RUS programs varied in the number and types of actions they had taken to coordinate, as described in the memorandum. However, the state-level programs did not take actions to cooperate in preparing planning documents. The actions taken by the five states, consistent with the memorandum, are as follows:

Cooperate in preparing planning documents.
In the states we visited, state SRF and RUS programs do not regularly coordinate when developing agency-specific planning documents. State SRF officials identify the projects that apply to their program in planning documents called intended use plans. In these plans, the states rank projects using state-determined criteria following EPA guidance, such as environmental and health concerns. Similarly, state-level RUS officials develop funding plans in which they separately rank projects applying to their program using national criteria that focus primarily on economic development, as well as environmental and health concerns.

Cooperate to remove policy and regulatory barriers. The state SRF and RUS programs in three of the states we visited had cooperated to remove policy barriers to coordination, such as differences in funding cycles. Specifically, in those states, federal and state officials meet regularly to ensure funding cycles are aligned to avoid unnecessary project delays. For example, in South Dakota, the state’s SRF and other state water and wastewater infrastructure funding programs have the same funding cycles and application timelines, which are administered by one agency. State and local officials told us that having the state funding programs aligned made it easier to navigate differences in funding cycles with RUS and other federal funding programs operating in the state. In addition, Montana officials created a working group to share information across state water and wastewater infrastructure programs and coordinate funding cycles. State and local officials in Montana said that regular coordination between federal and state officials on individual projects helped manage programmatic differences, such as differing funding cycles, to avoid lengthy delays in funding projects.
Officials and engineers in both states said that the benefits of these joint efforts included reductions in community costs and administrative burdens for submitting applications and related documents, as well as reductions in the federal and state agencies’ time in reviewing the documents. Other states have not worked to remove policy and regulatory barriers to coordination. For example, state and local officials in North Carolina told us that differences in application processes and funding cycles for the federal and state programs, including state SRF programs and the RUS program, increased the complexity and cost of applying for funding. Multiple agencies in the state that fund drinking water and wastewater infrastructure projects, including the SRF programs, have different funding cycles, so that communities have to apply separately to each program and at different times to make the project affordable. State and local officials in Colorado told us that they faced similar barriers.

Cooperate on project funding. Officials in all the states we visited meet at various times during the year, although some meet more frequently and discuss project funding in greater detail. Officials in Montana and South Dakota told us that they meet regularly to discuss upcoming projects, project applications, and coordination of funding, when possible. For example, officials from federal and state drinking water and wastewater funding programs in the Montana working group share information and discuss current projects and communities applying for funding. Community representatives said that state SRF program officials hold monthly meetings between the applicant and other state and federal funders to ensure that adequate funding is available to keep the project moving forward and to resolve any differences between the community and the federal and state programs providing funding.
Similarly, in South Dakota, officials for the state SRF and RUS programs told us that they discuss project applications routinely and work closely with officials from local planning districts who, in turn, use their expertise working with federal and state programs to help communities apply for funding. In Pennsylvania, the state SRF and state-level RUS programs coordinate early in the application process by (1) conducting joint outreach sessions with communities interested in applying for drinking water and wastewater project funding and (2) directing communities to the program that better fits their needs, according to state officials we spoke with. State-level officials and engineers we spoke with identified improvements in the efficiency and effectiveness of the programs because the officials direct communities to the program that best fits their needs or provides the best opportunity for a successful application. Officials in Colorado and North Carolina also meet but do not regularly discuss project funding or the communities that have applied for funding, and said that they have experienced lapses in program efficiency and effectiveness, such as loss of federal funding for the state. Officials in both states told us coordination is complicated by communities not disclosing that they have applied to other state or federal programs for funding. Specifically, according to federal and state officials, in some cases, communities and the consulting engineers representing them will sign a funding agreement with either the state SRF or state-level RUS program but continue to seek additional grant or subsidized loan funding from other state and federal programs to get additional grant funding or better loan terms. 
State SRF and state-level RUS program officials in North Carolina and Colorado told us that not disclosing multiple funding sources can lead to inefficiencies when state SRF program officials and state-level RUS officials are unaware that a community has applied to both programs. Specifically, state-level officials who administer the RUS program in North Carolina and Colorado reported having to, or expecting to, deobligate a total of more than $20 million that they had committed to fully fund projects because they were unaware that the state SRF programs had committed to fully fund the same projects. The state-level RUS program in North Carolina expects to have to deobligate funding for three projects totaling about $4.9 million in loan and grant funding, and the RUS program in Colorado had to deobligate funding for seven projects totaling $15.6 million. The two RUS state offices could not meet internal agency deadlines to fully obligate their available funds and, as a result, had to return these funds to the RUS headquarters pool. State officials in North Carolina recently developed a uniform cover sheet for all state drinking water and wastewater funding program applications that asks communities to disclose other sources of funding. However, our review of the uniform cover sheet found that applicants are not asked to provide information on funding requested from RUS and other federal drinking water and wastewater funding programs.

Cooperate in preparing environmental analyses and meeting other common federal requirements. In our visits to Montana and Pennsylvania, we learned that federal and state programs, including the state SRF and RUS programs, have coordinated to streamline the application process in their states. For example, in Montana, these programs coordinated to develop uniform application materials and preliminary engineering report requirements that are accepted by all federal and state water and wastewater infrastructure programs in the state.
Similarly, in Pennsylvania, program officials agreed upon uniform environmental analyses that are accepted by all programs, which reduces the cost and time for completing applications. Other states we visited have not agreed on uniform application requirements. According to federal and state officials in Colorado, North Carolina, and South Dakota, the state SRF and RUS programs have not developed documents with common formats and requirements for drinking water and wastewater infrastructure projects because of difficulty in integrating multiple program requirements. Specifically, state and local officials said that much of the information required in the environmental analyses was the same, but that agencies could not agree on a standard format and level of detail. For example, state SRF and RUS program officials in Montana told us they had tried, but were unable, to develop a uniform format for the presentation of their environmental analyses even though they had done so for their preliminary engineering reports. Furthermore, officials in Colorado and North Carolina expressed concern that having uniform documents that incorporated both state SRF and RUS program requirements would slow the application processes for all three programs and make them more costly. Specifically, officials administering both of the state SRF programs were concerned that, by adopting a format compatible with RUS policies and procedures, they would make the state SRF application process more onerous.

Rural communities rely on federal grants and loans to meet their water and wastewater infrastructure needs and to keep their drinking water and sewer user rates affordable. It is therefore important to make the most efficient use of limited federal funds to help as many communities as possible and to eliminate potential duplication of effort by communities when they apply for funds.
EPA and USDA recognized in a 1997 memorandum that it is necessary to more effectively and efficiently coordinate the SRF and RUS programs at the state level through four major actions: cooperating in preparing planning documents, removing policy and regulatory barriers, meeting regularly to discuss project funding, and preparing common environmental analyses and other common federal documents. In addition, EPA and USDA have taken actions to encourage states to improve coordination over the past 15 years. Specifically, EPA and USDA have informed state officials and communities about programs and funding opportunities by participating in conferences and workshops, conducting Webinars, and sponsoring training, and they have created a working group to examine the possibility of developing guidelines to assist states in developing uniform preliminary engineering reports that meet federal and state requirements. These recent actions are encouraging and will help communities. However, the guidelines have not yet been completed, and EPA and USDA have not initiated a similar effort to develop guidelines for uniform environmental analyses that can be used to meet federal and state requirements. Without uniform documents, rural communities face a continuing burden and additional costs when applying for federal funds to improve their water and wastewater infrastructure. The state-level programs in the five states we reviewed varied in the number and types of actions they had taken to coordinate across the four key areas in the 1997 memorandum. Some state-level programs have developed innovative ways to coordinate and remove barriers to coordination, but in other states, the programs have been less successful, warranting stronger federal attention. Moreover, the state-level programs did not take actions to cooperate in preparing planning documents in any of the states.
Until the state-level programs are regularly coordinating across the four key areas in the 1997 memorandum, including when developing planning documents, they will continue to risk potential program inefficiencies. Additional delays in taking actions to help improve such coordination could prevent EPA and USDA from more effectively and efficiently providing limited resources to needy communities. To improve coordination and to reduce the potential for inefficiencies and duplication of effort, we recommend that the Secretary of Agriculture and the Administrator of EPA take the following three actions:

ensure the timely completion of the interagency effort to develop guidelines to assist states in developing their own uniform preliminary engineering reports to meet federal and state requirements;

work together and with state and community officials to develop guidelines to assist states in developing uniform environmental analyses that could be used, to the extent appropriate, to meet state and federal requirements for water and wastewater infrastructure projects; and

work together and with state and community officials through conferences and workshops, Webinars, and sponsored training to reemphasize the importance of coordinating in all four key areas in the 1997 memorandum.

We provided EPA and USDA with a draft of this report for their review and comment, and both agencies provided written comments. EPA neither agreed nor disagreed with our first two recommendations but concurred with the third. USDA neither agreed nor disagreed with any of our recommendations. EPA’s comments are provided in appendix III and USDA’s comments are provided in appendix IV. Both agencies made technical comments that we incorporated as appropriate. In addition, we sent relevant portions of this report to state or federal officials responsible for administering the state SRF programs and state-level RUS programs for their review and technical comment.
In its comments on our first recommendation, that the agencies complete their efforts to develop uniform requirements for preliminary engineering reports, EPA stated that it supported the intent of the recommendation but noted it does not have the authority to require states to adopt a required format and that some states may not utilize it. EPA recommended that we replace the word “requirements” with the word “format.” USDA also indicated that EPA and USDA have no authority to require state governments to use a particular preliminary engineering report outline and requested that we therefore change the word “requirements” to the word “guidelines.” We recognize and agree that states have discretion to develop their own requirements for their SRF programs. In making our recommendations, we did not intend to limit states’ discretion in adopting their own preliminary engineering report requirements. However, we continue to believe that the federal agencies could do more to help states identify common requirements for their own uniform preliminary engineering report documents. We changed our recommendation to reflect that the states do have discretion and that the federal agencies should develop guidelines to help the states develop uniform preliminary engineering report requirements. In its comments on our second recommendation, to develop uniform requirements for environmental analysis documents, EPA stated that in principle it agreed with our recommendation but said it is not realistic to develop a one-size-fits-all approach. EPA said that developing the “essential elements” for environmental analyses should achieve the same outcome and requested that we change the word “requirements” to “essential elements.” USDA stated that it did not necessarily disagree with the intent of the recommendation but noted that EPA has limited authority to dictate specific requirements to states implementing the SRF program.
It also identified several procedural and policy hurdles, including the fact that USDA’s NEPA requirements are typically more stringent than the reviews under the SRF programs. USDA stated that it would work with EPA to discuss the concept of unified reviews and identify what would be required to achieve such reviews. USDA suggested that the Council on Environmental Quality could be called on to facilitate a working group between federal water and wastewater infrastructure funding programs on NEPA implementation. In making our recommendation, we did not intend to limit states’ discretion in adopting their own requirements for environmental analyses. We changed the wording of our recommendation to clarify that the agencies would develop guidelines to assist states in developing common requirements for environmental analyses. We also note that USDA’s suggestion for the Council on Environmental Quality to facilitate a working group seems reasonable, but we did not make this part of our recommendations because we did not review the Council on Environmental Quality as part of our work. EPA concurred with our third recommendation, that the agencies work together and with state and community officials in all four key areas of the 1997 memorandum, while USDA neither agreed nor disagreed with the recommendation. EPA said that our report showed that little overlap existed between the programs but that state-level coordination should be encouraged more broadly. USDA said that it had no control over communities that choose to change funding sources to a state SRF program after accepting funding from the state-level RUS programs. We understand that communities have the discretion to change funding sources if better loan and grant terms are available, but strong coordination can help the agencies know when communities are applying to other programs and what other communities might need funding.
Such coordination, envisioned in the 1997 memorandum, can avoid the loss of funds from states with high needs and other inefficiencies identified in this report. Furthermore, as EPA confirmed in its comments, state-level coordination can be encouraged more broadly to help other state and federal water and wastewater infrastructure funding programs better leverage limited state and federal funds. Finally, in its general comments on the draft report, USDA commented on GAO’s use of a relatively small sample of states for this review and noted that the RUS programs in those states were experiencing a transition in leadership and had not had time to develop relationships and learn other agencies’ programs. We selected states that had high rural water and wastewater infrastructure needs and a range of experience coordinating their water and wastewater infrastructure funding programs. We clearly state in the report that the sample is small and that our results cannot be generalized to all states. We recognize that the experience and trust established through long-term relationships is critical to the establishment of good coordination between federal and state programs. However, given the amount of time the memorandum has been in place, we believe that if good coordination between state SRF and state-level RUS programs had been established prior to the transition in state-level RUS leadership, it would have facilitated a smoother transition, and many of the challenges identified in our report may have been avoided. We will send copies of this report to the Administrator of EPA, the Secretary of Agriculture, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report examines (1) the potential for fragmentation, overlap, and duplication between the Environmental Protection Agency’s (EPA) Drinking Water and Clean Water State Revolving Fund (SRF) programs and the U.S. Department of Agriculture’s (USDA) Rural Utilities Service (RUS) Water and Waste Disposal program, both of which address water and wastewater infrastructure needs in rural communities, and (2) the extent to which these programs coordinate with each other at the federal and state level to help meet the water infrastructure needs of rural communities. We selected these programs for this review because they provided the highest amount of federal funds to water and wastewater infrastructure projects, which include projects in rural communities—defined for this report as communities with populations of 10,000 or less—in fiscal year 2011. The federal government has not established a formal or consistent definition of what constitutes a rural community; however, RUS defines a rural community as having a population of 10,000 or less. EPA, although it does not define communities as rural, gathers data on funding to communities of various sizes, including communities with populations of 10,000 or less. For both agencies, communities can include entities such as towns, cities, or counties, which make the decision whether to apply for funding from the programs. In some cases, regional water utilities or other utility associations can apply on behalf of a community or a group of communities. Using this definition allowed us to obtain and analyze similar data from both agencies.
To address both objectives, we reviewed government reports, statutes, regulations, guidance, budgets, and other relevant documents to identify federal support for rural water infrastructure programs and specifically the support provided by the Clean Water SRF, Drinking Water SRF, and RUS programs. In addition, we interviewed officials from EPA and USDA and from relevant nonprofit organizations, including the environmental finance center at Boise State University and the Council of Infrastructure Financing Authorities to collect financial and other information on the extent of fragmentation, overlap, duplication, and coordination among these rural water funding programs, as well as the current challenges facing rural communities. We then selected a nongeneralizable sample of five states to visit—Colorado, Montana, North Carolina, Pennsylvania, and South Dakota—to review the extent of fragmentation, overlap, and duplication among the EPA and USDA programs and the extent of coordination among the programs at the state level. The information from this sample cannot be generalized to all states but provides illustrative examples of their experiences in applying for funding from the EPA and USDA programs. We conducted site visits to these states to observe federally funded projects, discuss the funding process, and discuss community experiences applying for funding from the EPA and USDA programs. In each state, we judgmentally selected a nongeneralizable sample of communities to visit and projects to observe by analyzing lists of water and wastewater infrastructure projects we obtained from state SRF and state-level RUS program officials, and obtaining recommendations from officials we interviewed. We used the lists of projects to identify communities and projects that had applied for or received funding from the state SRF and RUS programs, or both. 
We reviewed a total of 54 projects in 31 communities across the five states, all of which had experience in applying for funds for a drinking water or wastewater project, or both, from the SRF or RUS programs. As with the state sample, the information from the communities and projects we selected cannot be generalized to other communities and projects but provides illustrative examples. To address the first objective, we assessed fragmentation between the Clean Water SRF, Drinking Water SRF, and RUS programs by examining statutes, regulations, and guidance relevant to the programs. To determine overlap between the programs, we calculated the proportion of SRF funding that was allocated to communities with populations of 10,000 or less for state fiscal years 2007 through 2011 (state fiscal years generally start in July and end in June). We used data from EPA’s National Information Management System (NIMS), which collects and summarizes data on Clean Water and Drinking Water SRF program funding directed to communities of populations of all sizes, including communities with populations of 10,000 or less by states—the same size of communities toward which RUS directs its funding. We conducted interviews with EPA officials to assess the reliability of the NIMS data and found it reliable for our purposes of identifying state SRF funding for communities with populations of 10,000 or less. We compared this proportion of SRF funding with total RUS funding provided from USDA’s accounting system. We interviewed RUS officials about how these funding data are maintained and determined that the data were reliable for our purposes of identifying USDA funding for communities with populations of 10,000 or less.
To determine the potential for duplication at the project and activity level, we collected funding data for projects that had been funded by the state SRF programs, the state-level RUS programs, or both, as well as funding data from the communities we visited or whose officials we spoke with. In addition, we spoke with state SRF, state-level RUS, and community officials and consulting engineers to assess the extent to which projects were funded separately by state SRF or state-level RUS programs, or were jointly funded by these programs, and what activities were conducted. Duplication occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same recipients; however, in some instances, duplication may be warranted because of the magnitude or nature of the federal effort. Further, we collected and analyzed application materials—preliminary engineering reports and environmental analyses—from communities if the community had a project that was jointly funded by both the SRF and RUS programs or had applied to both programs for the same project. On the basis of this criterion, we obtained preliminary engineering reports for four projects in four states and environmental analyses for four projects in the same four states. To analyze the documents, we identified the components of each document and compared them with the others to determine those that were similar and different. We spoke with consulting engineers in those communities to determine whether the communities were required to submit separate documents with similar information to both programs. Because of the limited size of each sample, the results of our analysis are not generalizable to all such documents.
To address the second objective, we reviewed documents and initiatives, including a 1997 joint memorandum signed by EPA and USDA promoting better coordination between the state SRF and state-level RUS programs and interviewed headquarters officials at EPA and USDA to identify national efforts to encourage better coordination at the state level. To analyze whether EPA and USDA efforts and initiatives incorporated leading practices for interagency collaboration, we compared guidance in the 1997 memorandum with our prior work on practices that can help federal agencies enhance and sustain collaboration. In the states we visited, to determine how closely the state SRF and state-level RUS programs coordinate and whether their efforts to coordinate are consistent with the 1997 memorandum, we reviewed state-level guidance and documentation from state coordinating bodies and interviewed state-level SRF and RUS program officials, community officials, consulting engineers, and technical assistance providers. We identified actions taken by states that were consistent with actions identified in the 1997 memorandum and assessed whether these fulfilled the actions identified in the memorandum using “yes” to indicate the action was fully taken, “no” to indicate that it was not taken at all, and “partial” to indicate the action had not been fully taken. We selected the five states we visited using a multistep process and several sources of information: funding needs for rural areas; geographic location; and level of coordination between state and community partners. We first narrowed the number of states we could visit to 15 states by analyzing EPA and USDA data on funding needs.
To do so, we determined the relative level of funding needed in each state using the following data, by state, for communities with populations of 10,000 or less: (1) per capita needs for drinking water infrastructure, (2) per capita needs for clean water infrastructure, (3) drinking water infrastructure needs as a percentage of total state drinking water needs, (4) clean water infrastructure needs as a percentage of total state clean water needs, (5) the number of backlogged RUS water and wastewater infrastructure project requests, and (6) the total amount of RUS loan and grant funding requested for the backlogged projects. We obtained and analyzed these six categories of data from EPA’s Drinking Water and Clean Water Needs Assessment reports, and USDA’s data on backlog of funding applications. To assess the reliability of EPA’s data, we reviewed the agency’s quality control efforts over the data. To assess the reliability of the USDA data, we interviewed RUS officials on how they obtained and verified the data. We determined that both sets of data were sufficiently reliable for our purposes of selecting a sample of states to visit. Because not all states had complete data, we created three groups of states for analysis: 35 states had full data, or data for all 6 categories; 11 states had partial data, or data for 4 of the 6 categories; and 4 states had mixed data that we determined was not sufficient to analyze. Because the amount of data varied for each group, we determined that we would sample from each group separately. Next, for the 35 states that provided complete data, we ranked the states from highest to lowest (numbering the highest 1 and so on) within each of the six categories, basing the ranking on either percentage or dollars, depending on the category. We then identified the top 10 states in each category, selected the 10 states that appeared in three or more of the six categories and added the scores across the six categories for each state. 
We then conducted a very similar process for the 11 states that had partial data, except that we identified the states with the five highest values in each of the four categories of data and then selected the three states that appeared in at least three of the four categories. This parallel analysis gave us 10 states from the full-data group and three states from the partial-data group. We then selected two states from the third group of states, which had mixed data available, on the basis of their physical size and the fact that they had the most data available in the group. We further narrowed down the number of states we could visit using geographic dispersion as a criterion. We located the 15 states selected through our analysis of funding data in six U.S. Census Bureau divisions and selected the five states from the full-data group that ranked highest according to the six categories. We also selected two states from the partial-data group and one state from the mixed-data group, for a total of eight states. From the eight remaining states, we selected Colorado, Montana, North Carolina, Pennsylvania, and South Dakota to visit based on the extent of coordination among the state SRF and RUS programs and the communities they served. We called the state SRF and RUS state-level officials to discuss whether the programs met and how frequently they jointly funded projects. We considered the range of coordination in each of the eight states to judgmentally select the five states we visited. We conducted this performance audit from September 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
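The multistep state-selection process described above (rank states within each needs category, keep those that appear in the top group of several categories, then sum ranks into a score) can be sketched as code. This is an illustrative reconstruction only: the `select_states` helper, the category names, and the data are all hypothetical, not GAO's actual methodology or data.

```python
# Illustrative sketch of the ranking-and-screening logic described in the
# methodology. All names and numbers here are hypothetical, not GAO's.

def select_states(needs, top_n=10, min_categories=3):
    """needs maps category -> {state: value}; a higher value means greater need.

    Assumes every state has a value in every category (the 'full data' case).
    """
    ranks, top_sets = {}, {}
    for category, values in needs.items():
        # Rank states from highest to lowest need (rank 1 = highest need).
        ordered = sorted(values, key=values.get, reverse=True)
        ranks[category] = {state: i + 1 for i, state in enumerate(ordered)}
        top_sets[category] = set(ordered[:top_n])

    # Keep states that appear in the top group of at least min_categories categories.
    candidates = set().union(*top_sets.values())
    selected = [s for s in candidates
                if sum(s in top for top in top_sets.values()) >= min_categories]

    # Score each selected state by summing its ranks across all categories;
    # a lower total indicates consistently higher need.
    scores = {s: sum(r[s] for r in ranks.values()) for s in selected}
    return sorted(selected, key=scores.get), scores

# Hypothetical needs data for four states across three categories.
needs = {
    "per_capita_drinking_water_needs": {"A": 9, "B": 7, "C": 5, "D": 1},
    "per_capita_clean_water_needs": {"A": 8, "B": 2, "C": 6, "D": 1},
    "rus_backlog_requests": {"A": 3, "B": 9, "C": 8, "D": 1},
}
selected, scores = select_states(needs, top_n=2, min_categories=2)
# selected == ["A", "B", "C"]; state D never reaches a top group.
```

GAO's actual analysis used six categories, top-10 cutoffs, and an appearance threshold of three categories for the full-data states; the thresholds are shrunk here only to keep the example small.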
Table 6 provides information on the percentages and amounts of funding provided, by state, through EPA’s Drinking Water and Clean Water SRF programs to communities with populations of 10,000 or less. In addition to the individual above, Susan Iott, Assistant Director; John Barrett; Elizabeth Beardsley; Mark Braza; Elizabeth Curda; Richard Johnson; Micah McMillan; Sara Ann Moessbauer; Dan Royer; Tina Sherman; Carol Herrnstadt Shulman; and Kiki Theodoropoulos made key contributions to this report. | Many rural communities with populations of 10,000 or less face challenges in financing the costs of replacing or upgrading aging and obsolete drinking water and wastewater infrastructure. EPA and USDA oversee the three largest federally funded drinking water and wastewater funding programs for these communities. In response to Pub. L. No. 111-139, which directs GAO to identify and report on duplicative goals or activities in the federal government, this report examines the (1) potential for fragmentation, overlap, and duplication between EPA and USDA drinking water and wastewater infrastructure programs and (2) extent to which these agencies coordinate at the federal and state level to fund community water infrastructure projects. GAO analyzed relevant laws and regulations and program data and documents. GAO also visited five states based on high rural funding needs and geographic location (Colorado, Montana, North Carolina, Pennsylvania, and South Dakota) to meet with federal, state, and community officials and visit projects. GAO recommends that EPA and USDA complete guidelines to help states develop uniform preliminary engineering reports, develop guidelines to help states develop uniform environmental analyses, and reemphasize the importance of state-level coordination. EPA neither agreed nor disagreed with GAO's first two recommendations and concurred with the third. USDA neither agreed nor disagreed with the recommendations.
Funding for rural water and wastewater infrastructure is fragmented across the three federal programs GAO reviewed, leading to program overlap and possible duplication of effort when communities apply for funding from these programs. The three federal water and wastewater infrastructure programs--the Environmental Protection Agency's (EPA) Drinking Water and Clean Water State Revolving Fund (SRF) programs and the U.S. Department of Agriculture's (USDA) Rural Utilities Service (RUS) Water and Waste Disposal program--have, in part, an overlapping purpose to fund projects in rural communities with populations of 10,000 or less. For the 54 projects GAO reviewed in the five states it visited, this overlap did not result in duplicate funding, that is, funding for the same activities on the same projects. However, GAO identified the potential for communities to complete duplicate funding applications and related documents when applying for funding from both agencies. In particular, some communities have to prepare preliminary engineering reports and environmental analyses for each program. GAO's analysis showed--and community officials and their consulting engineers confirmed--that these reports usually contain similar information but have different formats and levels of detail. Completing separate engineering reports and environmental analyses is duplicative and can result in delays and increased costs to communities applying to both programs. EPA and USDA have taken some actions to coordinate their programs and funding at the federal and state levels to help meet the water infrastructure needs of rural communities, but GAO's review in five states showed that their efforts have not facilitated better coordination at the state level in more specific ways.
EPA and USDA signed a joint memorandum in 1997 encouraging state-level programs and communities to coordinate in four key areas: program planning; policy and regulatory barriers; project funding; and environmental analyses and other common federal requirements. As of July 2012, EPA and USDA had taken action at the federal level to help the states coordinate better and make programs more efficient for communities applying for funding. For example, EPA and USDA had formed a working group to draft uniform guidelines for preliminary engineering report requirements, but this effort is not yet complete. However, the agencies have not taken action to help states develop uniform environmental analysis requirements, as called for in the 1997 memorandum. Without uniform requirements, communities face a continuing burden and cost of applying for federal and state funds to improve rural water and wastewater infrastructure. Coordination in the four key areas varied across the five states GAO visited. For example, state and federal officials in Montana created a drinking water and wastewater working group to coordinate project funding and to resolve regulatory barriers such as different funding cycles between the programs. In addition, state and federal officials in Pennsylvania coordinated to develop uniform environmental analysis requirements. However, in North Carolina and Colorado, state-level programs did not coordinate well initially about project funding, which resulted in the state-level programs planning to pay for the same projects. The programs were able to avoid paying for the same projects, but state-level RUS programs have or expect to deobligate almost $20 million committed to these projects and return the funding to USDA. Further delays in coordinating programs could prevent funds from reaching needy communities. 
|
Since the inception of SBInet, we have reported on a range of issues regarding program design and implementation. For example, in October 2007, we testified that DHS had made some progress in implementing Project 28—the first segment of SBInet technology across the southwest border—but had fallen behind its planned schedule. In our February 2008 testimony, we noted that although DHS accepted Project 28 and was gathering lessons learned from the project, CBP officials responsible for the program said it did not fully meet their expectations and would not be replicated. We also reported issues with the system that remained unresolved. For example, the Border Patrol, a CBP component, reported that as of February 2008, problems remained with the resolution of cameras at distances over 5 kilometers, while expectations had been that the cameras would work at twice that distance. In our September 2008 testimony, we reported that CBP had initially planned to deploy SBInet technology along the southwest border by the end of 2008, but as of February 2008, this date had slipped to 2011 and that SBInet would have fewer capabilities than originally planned. In September 2009, we reported that SBInet technology capabilities had not yet been deployed and delays required the Border Patrol to rely on existing technology for securing the border, rather than using the newer SBInet technology planned to overcome the existing technology’s limitations. As of April 2010, SBInet’s promised technology capabilities are still not operational and delays continue to require Border Patrol to rely on existing technology for securing the border, rather than using the newer SBInet technology planned to overcome the existing technology’s limitations. When CBP initiated SBInet in 2006, it planned to complete SBInet deployment along the entire southwest border in fiscal year 2009, but by February 2009, the completion date had slipped to 2016. 
The first deployments of SBInet technology projects are to take place along 53 miles in the Tucson border sector, designated as Tus-1 and Ajo-1. As of April 7, 2010, the schedule for Tus-1 and Ajo-1 had slipped from the end of calendar year 2008 as planned in February 2008, and government acceptance of Tus-1 was expected in September 2010 and Ajo-1 in the fourth quarter of calendar year 2010. Limitations in the system’s ability to function as intended as well as concerns about the impact of placing towers and access roads in environmentally sensitive locations have contributed to these delays. Examples of these system limitations include continued instability of the cameras and mechanical problems with the radar at the tower, and issues with the sensitivity of the radar. As of January 2010, program officials stated that the program was working to address system limitations, such as modifications to the radar. As a result of the delays, Border Patrol agents continue to use existing technology that has limitations, such as performance shortfalls and maintenance issues. For example, on the southwest border, Border Patrol relies on existing equipment such as cameras mounted on towers that have intermittent problems, including signal loss. Border Patrol has procured and delivered some new technology to fill gaps or augment existing equipment. We have also been mandated to review CBP’s SBI expenditure plans, beginning with fiscal year 2007. In doing so, in February 2007, we reported that CBP’s initial expenditure plan lacked specificity on such things as planned activities and milestones, anticipated costs, staffing levels, and expected mission outcomes. We noted that this, coupled with the large cost and ambitious time frames, added risk to the program. At that time, we made several recommendations to address these deficiencies. 
These recommendations included one regarding the need for future expenditure plans to include explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBI program activities. Although DHS agreed with this recommendation, to date, it has not been fully implemented. In our June 2008 report on the fiscal year 2008 expenditure plan, we recommended that CBP ensure that future expenditure plans include an explicit description of how activities will further the objectives of SBI, as defined in the DHS Secure Border Strategic Plan, and how the plan allocates funding to the highest priority border security needs. DHS concurred with this recommendation and implemented it as part of the fiscal year 2009 expenditure plan. In reviewing the fiscal year 2008 and 2009 expenditure plans, we have reported that, although the plans improved from year to year, providing more detail and higher quality information than the year before; the plans did not fully satisfy all the conditions set out by law. In addition to monitoring program implementation and reviewing expenditure plans, we have also examined acquisition weaknesses that increased the risk that the system would not perform as intended, take longer to deliver than necessary, and cost more than it should. In particular, we reported in September 2008 that important aspects of SBInet were ambiguous and in a continued state of flux, making it unclear and uncertain what technological capabilities were to be delivered and when. Further, we reported at that time that SBInet requirements had not been effectively developed and managed and that testing was not being effectively managed. Accordingly, we concluded that the program was a risky endeavor, and we made a number of recommendations for strengthening the program’s chances of success. 
DHS largely agreed with these recommendations, and we have ongoing work that will report on the status of DHS’s efforts to implement them. We reported in January 2010 that key aspects of ongoing qualification testing had not been properly planned and executed. For example, while DHS’s testing approach appropriately consisted of a series of test events, many of the test plans and procedures were not defined in accordance with relevant guidance, and over 70 percent of the approved test procedures had to be rewritten during execution because the procedures were not adequate. Among these changes were ones that appeared to have been made to pass the test rather than to qualify the system. We also reported at this time that the number of new system defects identified over a 17-month period while testing was underway was generally increasing faster than the number of defects being fixed—a trend that is not indicative of a maturing system that is ready for acceptance and deployment. Compounding this trend was the fact that the full magnitude of this issue was unclear because these defects were not all being assigned priorities based on severity. Accordingly, we made additional recommendations; DHS largely agreed with them and has efforts underway to address them. Most recently, we concluded a review of SBInet that addresses the extent to which DHS has defined the scope of its proposed SBInet solution, demonstrated the cost-effectiveness of this solution, developed a reliable schedule for implementing the solution, employed acquisition management disciplines, and addressed the recommendations in our September 2008 report. Although we plan to report on the results of this review later this month, we briefed DHS on our findings in December 2009, and provided DHS with a draft of this report, including conclusions and recommendations, in March 2010. Among other things, these recommendations provide a framework for how the program should proceed.
In light of program shortcomings, continued delays, questions surrounding SBInet’s viability, and the program’s high cost vis-à-vis other alternatives, in January 2010, the Secretary of Homeland Security ordered a department assessment of the SBI program. In addition, on March 16, 2010, the Secretary froze fiscal year 2010 funding for any work on SBInet beyond Tus-1 and Ajo-1 until the assessment is completed and the Secretary reallocated $50 million of the American Recovery and Reinvestment Act funds allocated to SBInet to procure alternative tested and commercially available technologies, such as mobile radios, to be used along the border. In March 2010, the SBI Executive Director stated that the department’s assessment ordered in January 2010, would consist of a comprehensive and science-based assessment of alternatives intended to determine if there are alternatives to SBInet that may more efficiently, effectively and economically meet U.S. border security needs. According to the SBI Executive Director, if the assessment suggests that the SBInet capabilities are worth the cost, DHS will extend its deployment to sites beyond Tus-1 and Ajo-1. However, if the assessment suggests that alternative technology options represent the best balance of capability and cost-effectiveness, DHS intends to immediately begin redirecting resources currently allocated for border security efforts to these stronger options. As part of our continuing support to the Congress in overseeing the SBI program, we are currently reviewing DHS’s expenditure plan for the fiscal year 2010 Border Security Fencing, Infrastructure, and Technology appropriation, which provides funding for the SBI program. Additionally, we are completing a review of the internal control procedures in place to ensure that payments to SBInet’s prime contractor were proper and in compliance with selected key contract terms and conditions. 
Finally, we are reviewing controls for managing and overseeing the SBInet prime contractor, including efforts to monitor the prime contractor’s progress in meeting cost and schedule expectations. We expect to report on the results of these reviews later this year. In addition to monitoring SBInet implementation, we also reported on the tactical infrastructure component of the SBI program. For example, in October 2007, we reported that tactical infrastructure deployment along the southwest border was on schedule, but meeting CBP’s fencing goal by December 31, 2008, might be challenging and more costly than planned. In September 2008, we also reported that the deployment of fencing was ongoing, but costs were increasing, the life-cycle cost for fencing was not yet known, and finishing the planned number of miles by December 31, 2008, would be challenging. We also reported on continuing cost increases and delays with respect to deploying tactical infrastructure. In September 2009, we reported, among other things, that delays continued in completing planned tactical infrastructure, primarily because of challenges in acquiring the necessary property rights from landowners (GAO-09-896). The costs of deployment, operations, and future maintenance for the fence, roads, and lighting, among other things, are estimated at about $6.5 billion. CBP reported that tactical infrastructure, coupled with additional trained agents, had increased the miles of the southwest border under control, but despite a $2.6 billion investment, it cannot account separately for the impact of tactical infrastructure.
CBP measures miles of tactical infrastructure constructed and has completed analyses intended to show where fencing is more appropriate than other alternatives, such as more personnel, but these analyses were based primarily on the judgment of senior Border Patrol agents. Leading practices suggest that a program evaluation would complement those efforts. Until CBP determines the contribution of tactical infrastructure to border security, it is not positioned to address the impact of this investment. In our September 2009 report, we recommended that to improve the quality of information available to allocate resources and determine tactical infrastructure’s contribution to effective control of the border, the Commissioner of CBP conduct a cost-effective evaluation of the impact of tactical infrastructure on effective control of the border. DHS concurred with our recommendation and described actions recently completed, underway, and planned that it said will address our recommendation. In April 2010, SBI officials told us that the Homeland Security Institute was conducting an analysis of the impact of tactical infrastructure on border security. We believe that this effort would be consistent with our recommendation, further complement performance management initiatives, and be useful to inform resource decision making. This concludes my statement for the record. For further information on this statement, please contact Richard M. Stana at (202) 512-8777 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, Frances Cook, Katherine Davis, Jeanette Espinola, Dan Gordon, Kaelin Kuhn, Jeremy Manion, Taylor Matheson, Jamelyn Payan, Susan Quinlan, Jonathan Smith, Sushmita Srikanth, and Juan Tapia-Videla made key contributions to this statement. 
Secure Border Initiative: Testing and Problem Resolution Challenges Put Delivery of Technology Program at Risk. GAO-10-511T. Washington, D.C.: Mar. 18, 2010. Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations that Place Key Technology Program at Risk. GAO-10-158. Washington, D.C.: Jan. 29, 2010. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-1013T. Washington, D.C.: Sept. 17, 2009. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: Sept. 9, 2009. U.S. Customs and Border Protection's Secure Border Initiative Fiscal Year 2009 Expenditure Plan. GAO-09-274R. Washington, D.C.: Apr. 30, 2009. Secure Border Initiative Fence Construction Costs. GAO-09-244R. Washington, D.C.: Jan. 29, 2009. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1086. Washington, D.C.: Sept. 22, 2008. Secure Border Initiative: DHS Needs to Address Significant Risks in Delivering Key Technology Investment. GAO-08-1148T. Washington, D.C.: Sept. 10, 2008. Secure Border Initiative: Observations on Deployment Challenges. GAO-08-1141T. Washington, D.C.: Sept. 10, 2008. Secure Border Initiative: Fiscal Year 2008 Expenditure Plan Shows Improvement, but Deficiencies Limit Congressional Oversight and DHS Accountability. GAO-08-739R. Washington, D.C.: June 26, 2008. Department of Homeland Security: Better Planning and Oversight Needed to Improve Complex Service Acquisition Outcomes. GAO-08-765T. Washington, D.C.: May 8, 2008. Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions. GAO-08-263. Washington, D.C.: Apr. 22, 2008. Secure Border Initiative: Observations on the Importance of Applying Lessons Learned to Future Projects. GAO-08-508T. Washington, D.C.: Feb. 27, 2008.
Secure Border Initiative: Observations on Selected Aspects of SBInet Program Implementation. GAO-08-131T. Washington, D.C.: Oct. 24, 2007. Secure Border Initiative: SBInet Planning and Management Improvements Needed to Control Risks. GAO-07-504T. Washington, D.C.: Feb. 27, 2007. Secure Border Initiative: SBInet Expenditure Plan Needs to Better Support Oversight and Accountability. GAO-07-309. Washington, D.C.: Feb. 15, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Securing the nation's borders from illegal entry of aliens and contraband, including terrorists and weapons of mass destruction, continues to be a major challenge. In November 2005, the Department of Homeland Security (DHS) announced the launch of the Secure Border Initiative (SBI)--a multiyear, multibillion dollar program aimed at securing U.S. borders and reducing illegal immigration. Within DHS, the U.S. Customs and Border Protection (CBP) provides agents and officers to support SBI. As requested, this statement summarizes (1) the findings and recommendations of GAO's reports on SBI's technology, known as SBInet (including such things as cameras and radars), and DHS's recent actions on SBInet; and (2) the findings and recommendations of GAO's reports on tactical infrastructure, such as fencing, and the extent to which CBP has deployed tactical infrastructure and assessed its operational impact. This statement is based on products issued from 2007 through 2010, with selected updates as of April 2010. To conduct these updates, GAO reviewed program schedules, status reports and funding and interviewed DHS officials. 
Since the inception of SBInet, GAO has reported on a range of issues regarding design and implementation, including program challenges, management weaknesses, and cost, schedule, and performance risks; DHS has largely concurred with GAO's recommendations and has started to take some action to address them. For example, in October 2007, GAO testified that the project involving the first segment of SBInet technology across the southwest border had fallen behind its planned schedule. In a September 2008 testimony, GAO reported that CBP's plans to initially deploy SBInet technology along the southwest border had slipped from the end of 2008 to 2011 and that SBInet would have fewer capabilities than originally planned. As of April 2010, SBInet's promised capabilities were still not operational. Limitations in the system's ability to function have contributed to delays. GAO has also reviewed CBP expenditure plans and found a lack of specificity on such things as planned activities and milestones. GAO made recommendations, including the need for future expenditure plans to include explicit and measurable commitments relative to the capabilities, schedule, costs, and benefits associated with individual SBI program activities. While DHS has concurred with GAO's recommendations, and its expenditure plans have improved from year to year in detail and quality, the plans, including the one for fiscal year 2009, did not fully satisfy the conditions set out by law. Further, in September 2008, GAO made recommendations to address SBInet technological capabilities that were ambiguous or in a state of flux. DHS generally concurred with them. In January 2010, GAO reported that the number of new system defects identified over a 17-month period while testing was underway was generally increasing faster than the number of defects being fixed, a trend not indicative of a maturing system.
Given the program's shortcomings, in January 2010, the Secretary of Homeland Security ordered an assessment of the program, and in March 2010, the Secretary froze a portion of the program's fiscal year 2010 funding. GAO plans to report in May 2010 on the SBInet solution and the status of its September 2008 recommendations. CBP has completed deploying most of its planned tactical infrastructure and has begun efforts to measure its impact on border security, in response to a GAO recommendation. As of April 2010, CBP had completed 646 of the 652 miles of fencing it committed to deploy along the southwest border. CBP plans to have the remaining 6 miles of this baseline completed by December 2010. CBP reported that tactical infrastructure, coupled with additional trained agents, had increased the miles of the southwest border under control, but despite a $2.6 billion investment, it cannot account separately for the impact of tactical infrastructure. In a September 2009 report, GAO recommended that to improve the quality of information available to allocate resources and determine tactical infrastructure's contribution to effective control of the border, the Commissioner of CBP conduct a cost-effective evaluation of the impact of tactical infrastructure. DHS concurred with GAO's recommendation and, in April 2010, told GAO that the Homeland Security Institute had undertaken this analysis. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Various organizations across DOD perform functions related to the recruiting, accessions, and training of active-duty enlistees as shown in figure 1. Enlistment processing and qualification determinations for age, citizenship, education, dependency status, and moral character are made by each military service. Their respective recruiting entities conduct preliminary screening of applicants to determine if they meet overall and medical DOD enlistment requirements. For example, recruiters also conduct a background review to screen an applicant for potentially disqualifying moral factors, review the applicant's education credentials, and assist in completing a medical history report. When an applicant answers any question on the preliminary medical screening form affirmatively, they are expected to obtain, or to authorize others within DOD (such as the recruiter assisting them) to obtain, additional documentation regarding that medical condition to include with the medical prescreen questionnaire. Once completed, recruiters forward the medical prescreening report and any other documentation collected to military service recruiting liaisons located at a MEPS location, who schedule the applicant for further review prior to a medical examination. USMEPCOM officials perform various functions, which include verifying personal identity; performing medical exams; documenting, reviewing, and updating applicant medical history; determining the extent to which applicants meet DOD's medical qualification standards; supporting the military service medical waiver review process; administering the Armed Services Vocational Aptitude Battery and special purpose tests; conducting pre-enlistment interviews; conducting the oath of enlistment; and verifying signed enlistment contracts. The locations of the MEPS are displayed in figure 2.
Each MEPS location is staffed with military and civilian personnel, including a chief medical officer, with additional medical personnel and recruiting liaisons representing each military service. MEPS medical personnel collect blood and urine specimens to send for Human Immunodeficiency Virus (HIV) and drug testing and examine applicants in physical and behavioral health areas in accordance with DOD's medical qualification standards for enlistment. Finally, a MEPS physician will make a final determination as to whether an applicant does or does not meet accession medical standards based on the applicant's medical history, a physical examination, and test results. USMEPCOM-designated physicians are the DOD medical authority for applicants processing with USMEPCOM for determining if an applicant medically meets the requirements of Title 10 to be qualified, effective, and able-bodied prior to enlistment. For those found to have disqualifying conditions, the MEPS physician will recommend for or against pursuing a medical waiver to the military services' medical waiver authorities, who are authorized to grant medical waivers. Only applicants who are medically qualified are allowed to go to basic training. Some enlistees leave for basic training from their hometowns and some return to the MEPS to undergo a brief follow-up physical inspection to determine whether they continue to meet the medical qualification standards for military service. For information on selected DOD and military service instructions, policies and guidance regarding medical screening of applicants, see appendix II. New enlistee basic training varies from 7 to 12 weeks depending on the military service. The Air Force basic training program lasts 7.5 weeks and is given at one training site located at Joint Base San Antonio-Lackland in San Antonio, Texas.
Navy recruits remain in basic training for approximately 7 weeks at one training site, located at the Naval Station Great Lakes in North Chicago, Illinois. The Marine Corps’ basic training is 12 weeks and recruits are trained in San Diego, California, or Parris Island, South Carolina. The Army’s basic training is 10 weeks and recruits are trained at Fort Benning, Georgia; Fort Jackson, South Carolina; Fort Sill, Oklahoma; or Fort Leonard Wood, Missouri. After completing basic training, most enlistees complete follow-on training in technical skills, though the length of such training can vary widely by military service from a few weeks to a year or more. Figure 3 summarizes the most common recruiting, screening, and training process for new enlistees. Based on our analysis of DOD accession and attrition data, early attrition rates due to medical reasons during an enlistee’s initial term of commitment were generally stable for fiscal years 2005 through 2015. Although there were some increases and some decreases across the years for each of the time intervals we assessed, these changes were relatively small, with an average change of just over 1 percentage point. Figure 4 shows the estimated cumulative medical early attrition rates at the 6-month, 12-month, 24-month, 48-month, and 72-month points for servicemembers who separated prior to fulfilling their first term of commitment by accession year cohorts for fiscal years 2005 through 2015. For example, the medical early attrition rate at the 48-month point of enlistees’ initial term of commitment was an estimated 14.9 percent in fiscal year 2005 and an estimated 13.7 percent in fiscal year 2011—the most recent year for which 48 months of data were available—with fluctuations between these years. 
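The cumulative early attrition rates described above can be reproduced from per-enlistee records with a short calculation. The sketch below is illustrative only: the records, cohort years, and separation months are made up, and the checkpoint intervals follow the report's 6-, 12-, 24-, 48-, and 72-month points.

```python
# Illustrative per-enlistee records: (accession fiscal year, months served
# before an early medical separation, or None if no such separation).
# These values are invented for demonstration, not drawn from DOD data.
records = [
    (2005, 4), (2005, None), (2005, 30), (2005, None), (2005, 10),
    (2011, None), (2011, 50), (2011, 2), (2011, None), (2011, None),
]

CHECKPOINTS = (6, 12, 24, 48, 72)  # months into the initial term of commitment

def cumulative_attrition(records, cohort_year):
    """Return the share of a cohort separated at or before each checkpoint."""
    cohort = [months for year, months in records if year == cohort_year]
    rates = {}
    for point in CHECKPOINTS:
        separated = sum(1 for m in cohort if m is not None and m <= point)
        rates[point] = separated / len(cohort)
    return rates

rates_2005 = cumulative_attrition(records, 2005)
```

Because each checkpoint counts every separation up to that month, the rate for a cohort can only stay flat or rise as the checkpoint moves later, which is why the 48-month figures in the report exceed the 6-month figures for the same accession year.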
Additionally, the medical early attrition rate at the 6-month point of enlistees’ initial term of commitment was an estimated 5.2 percent in fiscal year 2005 and 3.6 percent in fiscal year 2015, with fluctuations between these years. Based on our analysis of DOD separation categories that were explicitly of a medical nature, we identified the leading categories of early attrition due to medical reasons, as shown in figure 5. According to this analysis, the leading category of early attrition due to medical reasons is “Unqualified for active duty, other” which DOD defines as a nondisability medical condition, such as obesity, motion sickness or allergies, that interferes with the performance of duties and contributes to the failure to meet physical readiness standards. Other leading categories of early attrition due to medical reasons include drug abuse, disability with severance pay, and failure to meet weight or body fat standards. Military service officials stated they have taken numerous steps to decrease early attrition due to medical reasons by taking steps to help improve the new enlistees’ physical and mental condition while in basic training. For example, Army officials at Fort Benning stated that they are piloting a program called the Initial Entry Training Physical Resiliency Enhancement Program. This program trains enlistees who may be prone to injury for 3-5 weeks before shipping them to basic training in an attempt to improve physical fitness and reduce injuries. Air Force officials at Joint Base San Antonio-Lackland stated that they are piloting a program to embed sports medicine experts throughout basic training to identify poor physical fitness practices and intervene before injuries occur. Moreover, Air Force officials also are piloting the use of a questionnaire called the Lackland Behavioral Questionnaire. 
Under this pilot, Air Force enlistees complete a questionnaire with over 70 questions on mental and behavioral health in an attempt to identify recruits with potential mental or behavioral issues so they can be interviewed by medical professionals and provided any necessary help or counseling early. Marine Corps officials stated that they place enlistees who fail the Marine Corps' initial strength test into a physical conditioning platoon for further physical conditioning before they begin basic training to help improve physical fitness and reduce injuries. Further, a Navy official stated that a specialized machine is used to measure enlistees' feet to select the proper shoes in an attempt to reduce injuries. For comparison purposes, we also analyzed overall early attrition rates during enlistees' initial terms of commitment. Analyzing DOD accession and attrition data, we found that, similar to early attrition rates for medical reasons, overall early attrition rates during enlistees' initial terms of commitment were generally stable for fiscal years 2005 through 2015. Although there were some increases and some decreases across years for each of the time intervals we assessed, these changes were relatively small, with an average change of 2 percentage points. Figure 6 shows the estimated cumulative overall early attrition rates at the 6-month, 12-month, 24-month, 48-month, and 72-month points of enlistees' initial terms of commitment by accession year cohorts for fiscal years 2005 through 2015. For example, the overall early attrition rate at the 48-month point of enlistees' initial terms of commitment was an estimated 29.9 percent in fiscal year 2005 and an estimated 26.9 percent in fiscal year 2011—the most recent year for which 48 months of data were available—with fluctuations between these years.
Additionally, the 6-month overall early attrition rate was an estimated 10.9 percent in fiscal year 2005 and an estimated 10.2 percent in fiscal year 2015, with fluctuations between these years. Our analysis of DOD separation categories for overall early attrition indicated that, as with our analysis of early attrition due to medical reasons, the leading category of overall early attrition is again “Unqualified for active duty, other.” Other leading categories of overall early attrition include drug abuse, poor entry level performance and conduct, and commissions of serious offenses. Figure 7 shows the reported leading categories of overall early attrition for enlistees by accession year for fiscal year 2005 through fiscal year 2015. USMEPCOM does not fully obtain, analyze, or use information for early attrition due to medical reasons within enlistees’ first 180 days of service. This is because DOD does not have a process for the military services’ training bases to provide USMEPCOM all of the medical records of enlistees who separate early due to medical reasons. Additionally, the database that USMEPCOM uses to perform complete statistical analyses on the early separation medical records it does receive is inoperable, impacting its ability to conduct such analyses. Finally, USMEPCOM does not use the information from these medical records to provide regular and specific feedback regarding early separations to MEPS medical personnel to improve the quality of applicant screening. A 2001 memorandum from the Assistant Secretary of Defense for Force Management Policy requests that basic training bases send medical records of enlistees who separated within the first 180 days of their military career for disqualifying medical conditions determined to have existed before the enlistee began military service (separations commonly known as Existed Prior to Service or EPTS discharges) to USMEPCOM for review and analysis. 
However, not all basic training bases provide medical records in accordance with the memorandum. In fact, for fiscal year 2015, the latest full fiscal year of data available, USMEPCOM reported receiving medical records for only 2,017 of 8,592 EPTS separations, a rate of 23 percent. USMEPCOM officials and basic training site officials we met with provided four reasons as to why all EPTS medical records are not provided to USMEPCOM. First, USMEPCOM officials stated that no uniform, standardized process to collect the necessary documentation from basic training sites has been established by any higher-level headquarters. Standards for Internal Control in the Federal Government states that management, in order to achieve its objectives, should design control activities to achieve objectives and respond to risks. Moreover, the standards state that management should document—in policies for each unit—its responsibilities for all operational processes. Management should also review related policies, procedures, and related control activities for continued relevance and effectiveness in achieving the entity's objectives. Officials stated that no specific process or instructions have been developed that clearly lay out the roles and responsibilities of USMEPCOM and the military services as well as the specific information that the services should provide to USMEPCOM. As such, inconsistent information is sent to USMEPCOM, and in some cases, certain medical records may not be sent at all, making analysis difficult, if not impossible. For example, USMEPCOM officials told us that Air Force officials were not sending medical records for psychological EPTS separations and said this was due to confusion, stemming from the Air Force's interpretation of DOD separation classifications and coding, as to whether these cases were considered medical separations.
Second, language used in the 2001 memorandum has led to some confusion by the military services as to whether the memorandum is simply a request or a requirement for them to send medical records of enlistees with EPTS separations at their training bases. Third, military service officials stated that since the memorandum was very old, they were unsure if it was still active or who was specifically responsible at the training bases for sending the medical records to USMEPCOM—personnel officials or medical professionals. Fourth, USMEPCOM officials stated that some military service officials cited the Privacy Act and concerns regarding their responsibilities for handling personal health information and the ability of USMEPCOM to safeguard the information adequately as reasons for their failure to send USMEPCOM the medical records for EPTS separations. USMEPCOM officials acknowledge the reluctance to send medical records because of the personal health information they contain. As such, they believe the use of electronic health records would provide stronger safeguards that should mitigate concerns about sending enlistees' medical records to USMEPCOM. For example, recent discussions between USMEPCOM and Navy training base officials have led the Navy to draft a memorandum of understanding to clarify responsibilities regarding sending EPTS records and handling medical information. As of April 2017, the Navy had not completed this memorandum of understanding with USMEPCOM, and USMEPCOM was still not receiving the Navy EPTS separation medical records. DOD is planning to reissue a DOD instruction in July 2017 with changes that would require basic training sites to forward EPTS medical information to USMEPCOM, according to DOD officials.
We obtained and reviewed the draft instruction and noted that it does require the submission of the medical records to USMEPCOM, but the draft instruction does not contain specific instructions aimed at addressing many of the reasons USMEPCOM and military service officials gave for the current failure to provide records. Specifically, the draft instruction does not identify a clear process with defined roles and responsibilities. As a result, we were unable to determine if the draft instruction will resolve any of the issues noted above, other than eliminating doubts about the currency and mandatory nature of the direction to provide the records to USMEPCOM. Without a clear process with defined roles and responsibilities, USMEPCOM may continue to not receive the majority of EPTS medical records for its review and analysis. USMEPCOM has been unable to fully analyze the medical records it does receive because its internal EPTS database has been nonoperational since September 2015 due to technical issues. Prior to September 2015, USMEPCOM officials told us that they scanned the paper medical records for EPTS separations that they received from basic training bases into this internal database. This allowed them to use information from these medical records to analyze specific data points, such as the cause of the separation and the MEPS location where the enlistee was medically qualified. This analysis allowed them to examine trends over time, such as a trend in separations due to errors related to a specific medical diagnosis or trends in processing errors to gain insight into problem areas. Without an operational EPTS database, USMEPCOM officials said that they can only conduct a limited, manual analysis of EPTS separation medical records, and are not able to fully analyze and utilize this information. 
Additionally, officials stated that the dependence on hard-copy medical records requires a large amount of resources to perform the manual analysis of these records and incurs large administrative costs associated with organizing medical records as they arrive, scanning them, manually coding results, and then repackaging the records. USMEPCOM officials stated that they are currently in the process of repairing the database, but they could not provide a schedule for its completion. We have previously reported that having a well-planned schedule is a fundamental management tool. In addition, the 2001 memorandum we previously discussed states that the findings from EPTS analysis form an important part of USMEPCOM’s quality control process, particularly in identifying and correcting physician errors and reducing the number of erroneous enlistments. Analyzing these particular separations provides insight into medical conditions that were not detected during the medical qualification process at a MEPS, allows USMEPCOM officials to identify trends in errors related to specific diagnoses, and provides information to improve the medical qualification process. According to USMEPCOM’s analysis of the limited number of EPTS medical records it received for fiscal year 2015, 47 percent of all such separations occurred due to enlistees concealing their medical history and 29 percent occurred due to the enlistee being unaware of the medical condition. For the same year, only 3 percent of such separations occurred due to an error on the part of MEPS personnel during the medical qualification process. Given the importance of the database to the analyses that USMEPCOM conducts, as long as it is unavailable, USMEPCOM will be hampered in its ability to conduct these analyses. The lack of a schedule for implementing the repairs to the database raises concerns about the timeliness of these repairs. 
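The kind of trend analysis the EPTS database supported (tallying separation causes and surfacing location-specific patterns) can be sketched roughly as follows. The records, cause labels, and MEPS location names here are hypothetical, chosen only to mirror the categories the report cites (applicant concealment, enlistee unawareness, and MEPS error).

```python
from collections import Counter

# Hypothetical EPTS separation records: (qualifying MEPS location, cause).
# Both fields are invented for illustration.
epts_records = [
    ("Chicago MEPS", "applicant concealment"),
    ("Chicago MEPS", "condition unknown to enlistee"),
    ("Denver MEPS", "applicant concealment"),
    ("Denver MEPS", "MEPS error"),
    ("Denver MEPS", "applicant concealment"),
]

def cause_breakdown(records):
    """Percentage of EPTS separations attributed to each cause."""
    counts = Counter(cause for _, cause in records)
    total = sum(counts.values())
    return {cause: round(100 * n / total, 1) for cause, n in counts.items()}

def cause_by_location(records, cause):
    """Count occurrences of a given cause per MEPS location."""
    return Counter(loc for loc, c in records if c == cause)
```

With the paper-based workaround, each of these tallies has to be assembled by hand from scanned records; an operational database makes the same grouping a routine query over all records received.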
As we have previously noted, a well-planned schedule is important for ensuring that projects, such as the database repair, are completed on time. Without a schedule for these database repairs, USMEPCOM has limited assurance that this tool will be available to it expeditiously. USMEPCOM’s regulation for its Medical Qualification Program states that USMEPCOM will provide feedback to MEPS chief medical officers regarding EPTS separations, but the feedback that USMEPCOM gives them is limited due to the small number of EPTS separation medical records received from the training bases and the partial manual analysis being done on those received. Standards for Internal Control in the Federal Government states that management, in order to achieve its objectives, should use quality information by identifying information requirements, obtaining relevant data from reliable internal and external sources in a timely manner, and processing the obtained data into quality information. However, USMEPCOM officials stated individual feedback regarding EPTS separations has been limited to cases in which USMEPCOM alerted chief medical officers to an obvious error at their MEPS location during the medical qualification process. For example, two chief medical officers we contacted reported receiving feedback at least once regarding EPTS separations that were classified as “MEPS errors.” While MEPS chief medical officers do receive feedback through other methods, none of these methods are individually tailored to the performance of MEPS chief medical officers as it relates to EPTS separations. Specifically, USMEPCOM officials stated that feedback regarding EPTS separations occurs during more general forums such as the annual MEPS chief medical officer conference or during monthly conference calls where USMEPCOM can discuss questions or concerns that could affect all MEPS. 
Additionally, USMEPCOM has implemented its Peer Review Program where physicians at each MEPS review each other’s medical qualification decisions on a daily basis, if possible, as a local means of quality control. USMEPCOM officials also stated that they can provide feedback if a MEPS chief medical officer contacts USMEPCOM requesting clarification on an EPTS issue. However, these methods of feedback, while useful, do not provide individual feedback to MEPS chief medical officers regarding their specific decisions to medically qualify applicants who ultimately separated from military service within 180 days due to a medical reason. Six of the twelve MEPS medical officers that we contacted stated that they receive very limited to no feedback regarding EPTS cases specific to their MEPS and each said it would be helpful if they did. USMEPCOM officials acknowledge that it has been difficult for them to provide a large amount of feedback to the MEPS because of having to rely on paper medical records from the training bases and because of the technical difficulties they have had in analyzing what records they have received. The USMEPCOM officials believe that if they had an electronic medical record rather than the voluminous paper medical records, it would be easier to analyze information and share the results. Receiving feedback on EPTS separations could allow MEPS chief medical officers to refine and improve their performance during the medical qualification process, thereby disqualifying applicants at the MEPS rather than after the military services have invested significant resources in enlistees at the basic training sites. As previously noted, DOD’s Accession Medical Standards Analysis and Research Activity estimates that the average cost to recruit, screen, and train each enlistee is approximately $75,000. 
However, until USMEPCOM uses EPTS separation information to provide regular and specific feedback to MEPS chief medical officers, USMEPCOM may not be assured that it is adequately identifying medically disqualifying conditions among applicants for military service before the military services invest substantial resources in the applicants’ initial training. DOD has not implemented its new electronic health record system at the MEPS and its schedule to do so is uncertain. As a result, the MEPS rely largely on self-disclosed medical conditions, history, and records from the applicants to make their medical qualification decisions, and they use a paper-based system for recording and processing applicant medical information. DOD recognizes the need to upgrade the enlistee accession process; however, its schedule for implementing a new electronic health record system to support this process is uncertain. Without an electronic health record system that enables MEPS chief medical officers to electronically obtain the medical history of applicants and document health conditions in an electronic health record, MEPS officials rely on applicant self-disclosure and a paper-based process to evaluate the array of information related to each applicant’s medical history and current condition to determine if an applicant is medically qualified to join the military. At the beginning of the accession process, an applicant must self-disclose personal medical information by answering a medical prescreening questionnaire with over 160 questions covering major body systems along with sleep disorders; learning, psychiatric, and behavioral issues; and medicine usage. When an applicant answers any of these questions affirmatively, they are expected to obtain—or authorize others, such as their recruiter, to obtain—additional documentation regarding that medical condition to include with the medical prescreen questionnaire. 
While current medical processing provides valuable information, it relies heavily on applicants' self-disclosure of their medical history, leaving a potential gap in details if the applicant does not disclose any known medical conditions. This creates the possibility that an applicant could conceal a potentially disqualifying medical condition that should be considered during the medical qualification process. According to a DOD review, reliance on applicants' self-disclosed material limits information for review, constrains analysis, and hampers efforts to identify applicants who do not meet standards early in the military recruiting and accession process. As mentioned previously, USMEPCOM analysis shows that, in 2015, about 75 percent of early attrition due to medical reasons within the first 180 days of service was attributed either to applicant concealment of known medical conditions or to the enlistee being unaware of the medical condition. Even if an applicant self-discloses that they have or previously had a medical condition to either a recruiter or later to MEPS medical physicians and provides their medical records for further review, the MEPS use a largely paper-based documentation system that requires manual processing of the medical information collected on applicants. DOD has noted that enlistment across the military force requires processing 70 to 80 million pieces of paper every year—a slow, duplicative, and expensive process. Throughout the accession process for enlistees, paper is still mailed, faxed, hand-carried, and scanned, often multiple times, to the MEPS for use in processing the applicant for further review. In addition to the applicants' hard copy medical prescreen questionnaire and any supporting medical documents that are submitted to the MEPS, MEPS medical personnel record on paper forms the additional medical history and physical examination results and comments they obtain during their evaluation of the applicant.
Additionally, there may be numerous other forms used during an applicant’s medical processing that are not captured electronically, including authorizations for medical testing, consultation requests and results, and chain of custody documents. Officials at each of the MEPS locations we visited or contacted characterized the volume of paper they deal with on a daily basis as being challenging, overwhelming, an administrative burden, or time-consuming. Further, they said that handling, transferring, and manually processing the paper records are often done multiple times in order to advance an applicant through the medical qualification process. Figure 8 illustrates examples of the possible paper forms and documentation that may be used for enlisted applicants and accession processing activities. DOD is in the process of implementing a new integrated electronic health record system, but DOD’s schedule for deploying this system at the MEPS to assist with the medical screening of enlisted applicants is uncertain. In July 2015, the Program Executive Office, DOD Healthcare Management System, under the authority and direction of the Under Secretary of Defense for Acquisition, Technology, and Logistics, awarded a $4.3 billion contract for a new integrated electronic health record system known as MHS GENESIS. This new system is intended to give DOD the capability to electronically share more complete medical data with and between both federal and private sector medical facilities that are similarly equipped. More specifically, USMEPCOM currently has no electronic interfaces with holders of electronic medical information that would allow it to independently obtain medical history information on applicants, including information held by other DOD (e.g., Military Health System), government (e.g., Veterans Affairs, Social Security Administration), and public- and private-sector (e.g., medical insurance, pharmacy beneficiary) entities.
If implemented within USMEPCOM, this new electronic health record system could provide this electronic interface as well as other capabilities to improve USMEPCOM’s ability to access data and share medical information. A USMEPCOM concept of operations paper discusses how USMEPCOM believes the use of an electronic health record system could reduce both its reliance on applicant self-disclosure and its paper-based process of recording applicant medical information. For example, with MHS GENESIS’ planned interoperability and data exchange capabilities, USMEPCOM officials could reduce their reliance on applicant self-disclosure and improve the medical qualification decision-making process by interfacing with and accessing any existing applicant electronic medical records to independently obtain and verify applicant medical history information. Additionally, the ability to electronically exchange information would allow USMEPCOM to share the information more quickly with other accession stakeholders like service medical waiver review authorities. Further, a transition to an electronic health record system would begin to reduce the use of the paper-based system—and its associated costs and challenges—for recording the medical information obtained or generated during the accession process. Thus, integration of an electronic health record system into the accession community could enhance DOD’s ability to obtain and document complete, accurate, and fully accessible medical information; improve USMEPCOM officials’ medical qualification decisions; and perhaps affect early attrition rates.
Recognizing the potential for MHS GENESIS to support the MEPS, in June 2016 the Acting Principal Deputy Under Secretary of Defense for Personnel and Readiness emphasized that the modernization of accession processes is a priority and requested that the MHS GENESIS program management office coordinate with the accession community to include the MEPS in the deployment schedule for the new system. In response to this request, in August 2016 the program management office issued a memorandum stating it fully intends to work with USMEPCOM to ensure MEPS locations are included in the implementation of MHS GENESIS. Subsequently, officials from Accession Policy within the Office of the Under Secretary of Defense for Personnel and Readiness and USMEPCOM stated that they had initial coordination meetings in January and April 2017 with MHS GENESIS program management officials to discuss the inclusion of MEPS locations into the system deployment plans. According to USMEPCOM officials, the latest meeting produced an expectation that MHS GENESIS will meet its initial operating capability at one MEPS location in the fall of 2018. However, according to an MHS GENESIS program management official, detailed plans for deploying MHS GENESIS to the MEPS are in the earliest stages of development, and no deployment decisions or timelines have been established. Thus, DOD’s schedule for deploying MHS GENESIS at MEPS locations and ensuring that the system supports its accession programs is uncertain. We have previously reported that projects such as MHS GENESIS can benefit from the effective use of project planning and management practices. These practices can significantly increase the likelihood of delivering promised capabilities on time and within budget. Additionally, we and others have issued guidance calling for the development of essential documentation needed for project planning, execution, and management.
According to this guidance, project planning involves, among other things, establishing a schedule of actions required to attain project objectives. We have also reported that a well-planned schedule is a fundamental management tool that can help government programs use public funds effectively by specifying when work will be performed in the future and measuring program performance against an approved plan. Moreover, an integrated and reliable schedule can show when major events are expected as well as the completion dates for all activities leading up to them, which can help determine if the program’s parameters are realistic and achievable. Further, a reliable schedule can contribute to an understanding of the cost impact if the program does not finish on time. Until DOD completes development of a schedule that includes dates for MHS GENESIS’ deployment to MEPS locations, the department will not have assurance that its efforts to modernize the department’s medical screening process by reducing its reliance on self-disclosure and the processing of paper files are moving forward expeditiously and as planned. The military services enlist thousands of new servicemembers each year, but if incomplete medical information is gathered or if inadequate medical screening is performed, the military services may increase the likelihood that some of these enlistees may leave the military before their initial terms of commitment are fulfilled. Early separation is costly, and enlistee early attrition during their initial term of commitment due to medical reasons—many of which may be either not disclosed or unknown—constitutes a significant loss to the military services.
Even when an enlistee separates from military service within the first 180 days due to a medical reason, DOD can use information from those cases to improve its accession medical qualification process; however, DOD does not have a clear process to ensure that complete medical information is available about early separation cases from the military services. Moreover, it has not set a schedule to repair a key database at USMEPCOM to analyze this information. As a result, DOD may not be able to review and analyze information that could help improve the medical qualification decision process and ensure that MEPS are adequately identifying medically disqualifying conditions among applicants for military service. Further, DOD primarily relies on the self-disclosure of medical information by enlisted applicants and a paper-based system to process and obtain medical information for new enlistees. As DOD begins its planning efforts for integrating a new multibillion-dollar electronic health record system and transforming the current manual accession medical process to an automated one, it is important that DOD have a clear and complete schedule and plan in place to effectively manage this effort. Without a clear and complete schedule for implementation of its new system, DOD has limited assurance that the system will support the MEPS as planned. We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to take the following three actions: In coordination with the Director, USMEPCOM, and the military services, develop a clear process with defined roles and responsibilities to ensure that complete EPTS separation medical records for enlistees who separated within 180 days of service from the military services’ basic training sites are provided to USMEPCOM.
In coordination with the Director, USMEPCOM, establish a schedule to repair the internal EPTS database so that USMEPCOM can provide more regular and specific feedback to MEPS chief medical officers. In coordination with the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Healthcare Management Systems Program Executive Office, develop a schedule of actions for deploying its new electronic health record system, MHS GENESIS, within USMEPCOM that includes key activities such as the major actions required to accomplish this effort, completion dates for all actions leading up to these events, and dates for the system’s deployment to MEPS locations. We provided a draft of this report to DOD for review and comment. In written comments, reproduced in appendix III, DOD concurred with two recommendations; partially concurred with one recommendation; and separately provided technical comments, which we incorporated as appropriate. DOD concurred with our first recommendation to develop a clear process with defined roles and responsibilities to ensure that complete EPTS separation medical records are provided to USMEPCOM and described actions that the department plans to take to implement this recommendation. DOD concurred with our second recommendation to establish a schedule to repair the internal database it uses to analyze medical records for EPTS separations so that USMEPCOM can provide more regular and specific feedback to MEPS chief medical officers. In its comments, DOD stated that the database is being reviewed as part of a multi-year information technology modernization effort that includes the use of business intelligence tools found within DOD’s new electronic health record system known as MHS GENESIS. DOD stated that once these tools are available, USMEPCOM will be able to conduct the EPTS medical record reviews and provide detailed feedback to the MEPS chief medical officers. 
We believe that having access to MHS GENESIS’ business intelligence tools should improve USMEPCOM’s ability to conduct a more thorough analysis of EPTS separation medical records. However, the MHS GENESIS implementation schedule within USMEPCOM has not been finalized and is not expected to be approved until a Full Deployment Decision certification is issued some time in 2018. This means that the phased implementation of MHS GENESIS at the MEPS is likely to be several years away at a minimum. Therefore, we continue to believe that in the interim it would be beneficial for USMEPCOM to establish a schedule specific to repairing its current database that will allow for a more thorough analysis of EPTS separation medical records. DOD partially concurred with our recommendation to develop a schedule for deploying the new electronic health record system, MHS GENESIS, within USMEPCOM. After receiving our draft report, DOD officials expressed concerns regarding the office to which this recommendation was directed. DOD officials stated that since the Under Secretary of Defense for Personnel and Readiness is the functional owner of the new electronic health record system, the recommendation should be directed to that office, instead of to the Under Secretary of Defense for Acquisition, Technology and Logistics. After consideration of this information and a discussion with these officials, we agreed to revise the recommendation to be directed to the Under Secretary of Defense for Personnel and Readiness in conjunction with officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. Further, in its comments, DOD stated that it has already taken actions to implement our recommendation, as the Program Executive Office for DOD Healthcare Management Systems has developed proposed schedules for incorporating MHS GENESIS into the MEPS. 
DOD also stated that because these schedules were unofficial and unapproved, it could not share them with us. We are very concerned with DOD’s statement that the department is unable to share this unapproved information with us. Throughout the audit, we were repeatedly told by the Program Executive Office that details for the MHS GENESIS deployment to MEPS facilities did not exist. When we asked for clarification about whether schedules really did not exist or rather if officials were refusing to provide any existing schedules to us, we received no response. Our access authority under 31 U.S.C. § 716 provides us authority to obtain documents that may be “pre-decisional” in nature. Absent our ability to review these schedules, we continue to believe that our recommendation remains valid and we will continue to monitor DOD’s actions in this area related to the development, approval and implementation of a schedule for deploying the department’s new electronic health record system, MHS GENESIS, within USMEPCOM. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Commander, U.S. Military Entrance Processing Command; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
The scope of our review included Department of Defense (DOD) offices involved in the accession, training, or separation of enlisted active-duty servicemembers in the Army, the Air Force, the Navy, and the Marine Corps. Table 1 contains a list of the organizations, offices, and military installations that we visited or contacted during the course of our review. To determine the extent to which servicemembers are unable to complete their initial terms of commitment because of medical reasons, we analyzed data from the Defense Manpower Data Center (DMDC) on accessions and early attrition of active-duty enlistees from the four military services during their first terms of commitment, often between 4 and 6 years of active-duty service, for fiscal years 2005 through 2015. Fiscal year 2015 is the most recent year for which an entire year’s worth of attrition data are available and, for relevancy purposes, we obtained data not more than 10 years old, beginning in fiscal year 2005. We analyzed these data to show overall early attrition and early attrition due to medical reasons over selected intervals by military service for each fiscal year. We also analyzed DOD separation codes assigned to each separation to examine the leading categories of early attrition. We reviewed the data to check for their completeness and for obvious errors such as out-of-range date values. We also interviewed a knowledgeable official from DMDC regarding data quality and reliability. We determined that the data were sufficiently reliable for reporting historical early attrition trends. We also interviewed DOD, USMEPCOM, and military service officials to obtain their perspectives on early attrition rates. 
To determine the extent to which USMEPCOM obtains, analyzes, and uses information about enlistee early attrition due to medical reasons, we reviewed DOD memorandums and USMEPCOM regulations related to obtaining, analyzing, and using information about enlistee early attrition due to medical reasons. We also compared USMEPCOM practices for obtaining, analyzing, and using information from medical records for enlistees who separated within the first 180 days of their service due to medical conditions that existed prior to their service with the Standards for Internal Control in the Federal Government. This included the importance of designing control activities to achieve objectives and respond to risks and using quality information by identifying information requirements, obtaining relevant data from reliable internal and external sources in a timely manner, and processing the obtained data into quality information. Additionally, we interviewed officials at USMEPCOM and officials from four of the military services’ training bases to further understand the collection and reporting of early medical attrition information. These bases were selected on the basis of geographical dispersion and included one from each of the military services. To determine the extent to which DOD has implemented its new electronic health record system at the MEPS to obtain and document applicants’ medical information, we reviewed selected DOD, USMEPCOM, and military regulations related to applicant medical screening processes. Additionally, we selected a convenience sample of four MEPS that were located in large geographically dispersed U.S. cities that were also near a military service recruiting office and a basic training base to observe medical-related MEPS operations and interview officials. 
During our visits to the selected MEPS locations, we also interviewed officials from nearby military service recruiting organizations to discuss their perspectives on recruiting potential applicants and the process and challenges associated with medically screening applicants. We supplemented our visits to the large MEPS locations with questionnaires sent to MEPS command officials, chief medical officers, and military service recruiting liaisons of a nongeneralizable selection of eight small or medium-sized MEPS as determined first by workload level, then sorted randomly, and then chosen to ensure distribution across all MEPS battalions. Further, regarding DOD’s efforts to implement an electronic health record into MEPS locations, we interviewed officials from Accession Policy within the Office of the Under Secretary of Defense for Personnel and Readiness and USMEPCOM as well as contacted the Program Executive Office Defense Healthcare Management System to obtain information regarding the implementation status of DOD’s new electronic health record within USMEPCOM at the MEPS locations. We compared their efforts against selected information technology project management practices for developing well-planned schedules. We conducted this performance audit from July 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 2 provides a selected list of DOD and military service instructions, guidance, and policies regarding the medical screening and processing of military applicants. In addition to the contact named above, Kimberly C. 
Seay (Assistant Director), Vijay Barnabas, Rebecca Beale, Vincent Buquicchio, Cynthia Grant, Mae Jones, Amie Lesser, Josh Ormond, Amber Sinclair, Rachel Stoiko, Wade Tanner, and Sabrina Willard made major contributions to this report.

For fiscal years 2005 through 2015, the military services enlisted over 1.7 million servicemembers at an estimated cost of approximately $75,000 each. Incomplete medical information or inadequate screening of enlistees at MEPS may result in them not fulfilling their initial terms of commitment and the military services losing their investment in them. The House Report accompanying a proposed bill for the Fiscal Year 2017 National Defense Authorization Act included a provision for GAO to review applicant medical screening issues at the MEPS. This report assesses the extent to which (1) enlistees have not completed their initial terms of commitment due to medical reasons; (2) USMEPCOM obtains, analyzes, and uses information about enlistee medical early attrition; and (3) DOD has implemented its new electronic health record system at the MEPS. GAO analyzed accession and attrition data for fiscal years 2005 through 2015 (the most recent available), visited selected MEPS near services' training bases, and reviewed selected DOD, USMEPCOM, and service policies. GAO's analysis of Department of Defense (DOD) accession and attrition data found that early attrition rates due to medical reasons during an enlistee's initial term of commitment were generally stable for fiscal years 2005 through 2015. As shown in the figure, the medical early attrition rate at the 48-month point was an estimated 14.9 percent in fiscal year 2005 and an estimated 13.7 percent in fiscal year 2011—the most recent year for which 48 months of data were available. The leading category for early attrition was “unqualified for active duty, other,” which DOD defines as a nondisability condition such as obesity. U.S.
Military Entrance Processing Command (USMEPCOM), DOD's organization responsible for medically qualifying applicants for military service, does not fully obtain, analyze and use information about enlistees who separate early due to medical reasons. This is because DOD does not have a clearly defined process for the military services to provide USMEPCOM with all relevant medical records. Further, the database that USMEPCOM relies on to analyze these records is inoperable and no schedule has been developed to repair it. As a result, USMEPCOM has provided limited feedback to chief medical officers—responsible for the medical qualification decisions—that they could use to improve screening outcomes. Without addressing these issues, DOD has limited assurance that medically disqualifying conditions among new enlistees will be identified before the services invest substantial resources in their initial training. DOD has not implemented its new electronic health record system at the Military Entrance Processing Stations (MEPS) and its schedule to do so is uncertain. Known as MHS GENESIS, this new system is intended to give DOD the capability to electronically share more complete medical data with and between both federal and private sector medical facilities that are similarly equipped. Without a clear and complete schedule for implementation of MHS GENESIS, DOD has limited assurance that the system will support the MEPS as planned. GAO recommends that DOD develop a clear process for USMEPCOM to obtain medical early separation records, a schedule to repair the database used to analyze the records, and a schedule to deploy MHS GENESIS at the MEPS. DOD concurred with the first two recommendations and partially concurred with the third, stating it is already developing such a schedule. GAO continues to believe action is needed as discussed in the report. |
As part of a Navy-wide infrastructure cost reduction initiative, the Navy is restructuring its shore establishment by consolidating installation management functions in areas where significant concentrations of Navy activities exist, such as San Diego, California; Jacksonville, Florida; and—for purposes of this report—the northeastern area of the United States. This initiative seeks to reduce management and support redundancies and duplications of effort and to eliminate unnecessary overhead. In doing so, a single commander is given responsibility for the management and oversight of naval shore installations within a specific geographic region. Other responsibilities will include providing base support services to Navy operating forces and other naval activities and tenant commands, as well as managing the funding associated with these services. According to officials at NSB New London, total base support funding for the Northeast region is estimated to be between $165 million and $185 million in fiscal year 1999. Creation of a separate command to manage and oversee base support functions at Navy shore installations is expected to provide a more dedicated and expanded regionwide focus on those activities in an effort to reduce overhead costs and achieve increased efficiencies totaling millions of dollars. The establishment of the Northeast command will bring the total number of regional naval coordinators worldwide to 13. In recommending the establishment of the new command, CINCLANTFLT is seeking to relieve the Commander, Submarine Group Two, an operational commander at NSB New London, of the nonoperational duties associated with the regional coordinator role. Establishing a separate command headed by a flag rank officer (admiral) to oversee northeastern shore installations would be consistent with other CINCLANTFLT regional commands that exist in Norfolk, Virginia, and Jacksonville, Florida.
According to Navy officials, these regional commands will support Navy efforts to eliminate redundant management structures, reduce infrastructure costs, and foster regional service delivery of installation management support. CINCLANTFLT officials estimated that the staff of the command would consist of a flag rank commanding officer, 27 other military personnel, and 27 civilian employees. The existing regional coordination staff at NSB New London consists of 9 military and 15 civilian personnel. CINCLANTFLT’s recommendation to establish the new command at NWS Earle is pending approval by the Chief of Naval Operations and the Secretary of the Navy. In reviewing CINCLANTFLT’s recommendation of NWS Earle for the new command headquarters, we could not be certain to what extent the Navy had fully considered its stated criteria to evaluate or compare alternate sites because documentation to support the Navy’s decision was limited. Additionally, costs associated with relocating regional coordination functions and staff from NSB New London to NWS Earle and operating from that site may be greater than those estimated by the Navy. Navy Instruction 5450.169D, regarding the establishment, disestablishment, or modification of Navy shore activities, states that several factors should be considered, including whether (1) an activity is currently performing the mission or an existing activity in the same geographical area can assume the mission, (2) an existing activity of the same type can perform the mission, and (3) the need for the activity is sufficient to offset the cost of establishing a separate activity. Additionally, between October 1997 and March 1998, the Navy stated in correspondence with senators and congressmen from Connecticut and Rhode Island that several factors were being considered in selecting a location for the command. 
These factors included the availability of office space, communications, and suitable family housing; proximity to the regional offices of other federal government agencies; access to transportation; operational and military support; relocation and alteration costs; and rent costs. Navy officials told us that they considered the criteria stated in the Navy instruction and in their congressional correspondence in evaluating and comparing alternate sites. However, we question the extent of this analysis. While Navy guidance does not specifically direct the preparation of cost comparisons for prospective sites, it does suggest that the Navy seek economy and efficiency in establishing new activities, which implies the need to compare costs among prospective sites. CINCLANTFLT officials told us that the site selection process began with their gathering some estimated cost data for prospective sites with the intent of performing a cost comparison. However, they were informed early in the process that CINCLANTFLT had already decided to locate the new command at NWS Earle because that was the desired location. Consequently, according to these officials, no further data were developed to estimate and compare the costs associated with establishing the command at sites other than NWS Earle. Our review of available documentation and discussions with Navy officials indicate that CINCLANTFLT’s recommendation to establish the Commander, Navy Region Northeast, at NWS Earle was based primarily on placing the command in closer proximity to New York City. CINCLANTFLT’s decision paper, referred to as a Fact and Justification Sheet, cited a number of needs and benefits of such a placement, focusing primarily on the need for Navy flag rank representation in the New York-New Jersey area.
Specifically, the justification highlighted activities such as the importance of acting as the resident Navy spokesperson; interacting on the Navy’s behalf with major corporations, labor unions, and other organizations associated with maritime commerce; and serving as the Navy’s official representative for major events such as visiting foreign dignitaries. CINCLANTFLT did perform analyses sufficient to estimate the cost to establish the command at NWS Earle at $1.89 million. We did not, however, independently verify these cost estimates. CINCLANTFLT’s analyses included cost estimates for renovation of flag and officer office space; displacement of the current occupants of this office space; moving office furniture, supplies, and equipment; civilian and military permanent change of station costs; civilian severance pay for those who do not relocate; and a recurring increase in travel expenses due to the location of NWS Earle in relation to its subordinate commands (see table 1). Detailed cost estimates to establish the command were not documented for other potential sites. CINCLANTFLT’s Fact and Justification Sheet acknowledges that no monetary or manpower savings have been identified with relocating the Commander, Navy Region Northeast, to NWS Earle. Our analysis shows potential for the Navy’s one-time cost estimates to be understated. For example: CINCLANTFLT officials estimated it would cost approximately $75,000 to renovate office space to accommodate the commander and his or her staff. However, officials at NWS Earle stated that this renovation cost estimate could increase to as much as $130,000 if the decision were made to install central versus window air conditioning. While CINCLANTFLT estimated that travel expenses would increase by about $75,000 per year for travel to other subordinate commands, other information indicates this estimate may be understated.
Officials at NSB New London, where the core staff for the new command are currently stationed, provided an analysis suggesting that these costs could increase by about $100,000 to $200,000 annually. We did not independently verify this analysis. However, establishing the command at NWS Earle will result in the command being located in the southernmost area of the region, making it relatively less accessible to other installations in the region than the current location at NSB New London or Newport. For example, travel from NWS Earle to other areas of the region would require greater use of air travel than from NSB New London or Newport, where cars and car pools are more readily used to reach other facilities. Figure 1 shows the approximate locations of Navy concentration areas in the northeast region. CINCLANTFLT’s Fact and Justification Sheet also does not reflect cost estimates for renovating the on-base housing at NWS Earle to accommodate the flag officer. According to NWS Earle officials, it would cost at least $20,000 to renovate the proposed admiral’s quarters to meet the Navy housing standards for flag officer quarters if the admiral chose to live on base. The Navy’s cost estimates do not include the civilian personnel payroll increase that will occur as a result of this move. Due to the location of NWS Earle, each civilian employee would be entitled to a salary increase to reflect the locality pay for that area. Based on the U.S. Office of Personnel Management 1998 General Schedule, locality pay rates are 9.76 percent and 9.13 percent for NWS Earle and NSB New London, respectively. The locality pay rate for Newport is 5.4 percent by comparison. In examining mission and support requirements of the new command, we found that the NWS Earle location raises two basic operational limitations when compared to the current location at NSB New London or the facilities at Newport.
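The payroll effect of this locality pay differential can be illustrated with a simple calculation. The sketch below applies the 1998 locality rates cited above to the 27 civilian positions planned for the command; the average base salary is a hypothetical assumption used only for illustration, not a figure from the Navy's estimates.

```python
# Illustrative estimate of the annual civilian payroll increase caused by the
# locality pay differential between NSB New London and NWS Earle.
# The average base salary below is an assumption for illustration only.

def annual_payroll(base_salary, staff, locality_rate):
    """Total annual payroll for `staff` employees at a given locality rate."""
    return staff * base_salary * (1 + locality_rate)

BASE_SALARY = 50_000   # assumed average General Schedule base salary (hypothetical)
CIVILIAN_STAFF = 27    # civilian positions planned for the new command

at_new_london = annual_payroll(BASE_SALARY, CIVILIAN_STAFF, 0.0913)  # NSB New London, 1998
at_earle = annual_payroll(BASE_SALARY, CIVILIAN_STAFF, 0.0976)       # NWS Earle, 1998

increase = at_earle - at_new_london
print(f"Estimated annual payroll increase at NWS Earle: ${increase:,.0f}")
```

Under these assumed salaries, the 0.63 percentage-point differential alone adds several thousand dollars to annual payroll, a recurring cost absent from the Navy's estimates.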
These limitations relate to increased travel time and costs associated with operating from that location and the adequacy of existing facility infrastructure to support the new headquarters relative to at least the NSB New London and Newport locations. According to CINCLANTFLT’s Fact and Justification Sheet, the proposed mission of the Commander, Navy Region Northeast, would primarily involve management and oversight of the widely dispersed naval shore activities in the northeast region. CINCLANTFLT officials expect that travel expenses would increase over what they would be in a more central location. According to NSB New London officials, the mission requires frequent travel to and from the naval activities within the region (see fig. 1 and app. I). Because NWS Earle is located in the southernmost part of the northeast region, these officials stated that there would likely be a greater reliance on travel by air than by car, in which several persons can travel together at less cost. Our review of factors such as office space, housing, and conference/training facilities at the sites we visited shows that NWS Earle has the least existing infrastructure to support the new command’s requirements. We observed that the available infrastructure at NWS Earle is primarily suited to support its mission of receiving, storing, and distributing naval ordnance and has limited office, conference, and classroom space. As stated previously, placing the new command at NWS Earle would require displacing and relocating existing command staff and renovating other space to accommodate them. Conversely, at NSB New London, the Navy would not incur any major renovation costs beyond the purchase and installation of additional office modular furniture to accommodate the increased number of staff.
We observed that the current headquarters building for the regional coordinator staff at NSB New London has sufficient vacant space on the first and third floors to accommodate the proposed expansion. Even if the Navy decides that the Commander, Navy Region Northeast, and the Commander, Submarine Group Two, would not occupy the same building, officials at NSB New London identified four other buildings on base that could accommodate the Commander, Navy Region Northeast. We also found that the Navy facilities and infrastructure at Newport would be adequate to support the command without major renovation costs. Additionally, NWS Earle does not have sufficient officer housing quarters available to accommodate an admiral and additional staff officers. The proposed staffing of the new command includes 17 officers, including the commanding officer, whereas the on-base family housing at NWS Earle includes 38 officer housing units of which only 2 were vacant as of August 1998 because they were being renovated. Furthermore, according to officials at NWS Earle, none of these officer housing units meets the standards for a flag officer. Although renovations could be made to improve some officer housing units, officials at NWS Earle stated that it is more likely the admiral and his senior staff would choose to reside in quarters available to them at the Fort Monmouth Army Base, about 6 miles away. This latter option is already the housing of choice for some command staff officers currently stationed at NWS Earle. Conversely, at both NSB New London and Newport, there is sufficient housing space to accommodate the proposed command’s military staff. We observed that both of these bases have housing areas with sufficient space to accommodate both the numbers and grade levels of the command’s military staff. 
As part of the regional coordination mission involving management and oversight of naval shore activities in the region, the command hosts frequent conferences and training seminars for personnel from other naval installations throughout the region. For example, during fiscal year 1998, about 20 to 50 personnel at a time attended training courses and conferences at NSB New London that related to regional activities such as the Navy’s commercial activities program, casualty assistance calls, information technology, facilities engineering, family advocacy and family services, and regional security. Officials at NWS Earle stated that the command building there would not include adequate conference and training facilities to accommodate these activities. We observed, for example, that the current command building at NWS Earle that would be used to house the new command has one conference room, which has sufficient space for a maximum of about 15 to 20 participants. Conversely, we observed that the facilities occupied by the regional coordinator staff at NSB New London currently have several large conference rooms and several other smaller meeting facilities that are sufficient to accommodate expanded requirements. Similarly, we observed that the building at Newport that would be used for the new regional command has sufficient conference and meeting rooms to accommodate the command’s anticipated requirements. While the CINCLANTFLT justification was based primarily on NWS Earle’s proximity to New York City, the desire for a flag rank officer at that location, and several other public relations-related factors, the high priority given to these criteria appears questionable when compared to the command’s core mission responsibilities. 
CINCLANTFLT’s Fact and Justification Sheet states that (1) NWS Earle is the only primary homeport for Navy ships on the East Coast without a flag officer and (2) there is a need for Navy flag officer representation in the New York-New Jersey area to act as the resident Navy spokesperson and to interact on the Navy’s behalf with major corporations, labor unions, other organizations associated with maritime commerce, and publishing and media concerns. It also states that the regional commander would serve as the official Navy representative for major events, visiting foreign dignitaries, and U.S. Navy and foreign ship port visits. The regional commander would serve on numerous area special purpose councils and respond to requirements for support functions and services in the New York City area arising from the large population and the Navy’s recruiting efforts in the area. Furthermore, the justification sheet states that there is a requirement for essential support functions and services such as major casualty assistance calls programs, extensive regional public affairs information services, and a large community service program in the New York-New Jersey area. While each of the justification points highlighted in the justification sheet has merit, available data indicate that these functions differ significantly from the command’s core responsibilities. These core responsibilities are more related to managing installation support services at the Navy’s bases and commands in the region and other important functions highlighted in the command’s draft Mission, Functions and Tasks Statement, such as providing primary resource support, management control, and technical support of assigned shore activities. In addition, according to regional coordination officials at NSB New London, flag presence has been required in the New York City area only on an average of about once every 2 months. 
CINCLANTFLT officials stated that flag presence has been requested in the New York City area more often, but they were unable to provide documentation to quantify their position. Nevertheless, in terms of increased proximity to New York City, NWS Earle is approximately 1-1/2 hours away by automobile. NSB New London is about 2 hours from New York City by automobile and is more centrally located in the northeast region. Therefore, it is not clear that NWS Earle provides a geographic advantage over other locations. Officials at NSB New London stated that they are performing many of the functions proposed for the new command. In this regard, CINCLANTFLT officially designated the Commander, Submarine Group Two, at NSB New London as the Naval Northeast Regional Coordinator in 1994. Some of the regional functions that NSB New London staff have been performing consist of facilities management, regional environmental coordination, disaster preparedness, casualty assistance coordination, family advocacy programs, regional security, and coordination of regional port visits. Additionally, NSB New London staff have recently begun a number of regional projects, including public affairs office consolidation; housing studies; supply coalition; and a Joint Inter-service Regional Support Group, which encompasses support for military facilities in Connecticut, Rhode Island, and Massachusetts. The establishment of a separate Commander, Navy Region Northeast, will also expand the responsibilities of the regional coordinator to include, for example, managing the funds for the base operations support functions at the naval shore installations in the region. As previously noted, while the Navy has emphasized the establishment of a new command to oversee base support operations in the region, officials at NSB New London stated that they are currently responsible for many of the functions proposed for the new command. 
According to these officials, moving the command to NWS Earle could temporarily disrupt the core base operations functions already established if, as these officials suggest, many of the current employees choose not to relocate to NWS Earle. Moreover, we noted that by moving the new command away from NSB New London, the Navy would be separating the command from other regional activities currently located at NSB New London, including the Regional Supply Coalition and the Regional Emergency Command Center. We recognize that site selection decisions are ultimately a management prerogative based upon weighing relevant factors. At the same time, where policy guidance or other stipulated criteria are established to facilitate decision-making, we believe it is important for decisionmakers to ensure that such guidance and criteria are followed and documented to support the basis for their decisions. It is not clear, however, to what extent CINCLANTFLT’s site selection process was conducted in accordance with Navy guidance and other stipulated criteria regarding the current site selection recommendation. Further, the justification cited for recommending NWS Earle over the current location at NSB New London, or other locations, appears to have a number of weaknesses in the cost estimates that were made and consideration of nonmonetary benefits such as infrastructure deficiencies at NWS Earle and command travel time gains. We recommend that the Secretary of Defense require the Secretary of the Navy to review and more fully assess the prospective headquarters location for the Commander, Navy Region Northeast, against the Navy’s decision-making criteria, taking into consideration issues and questions raised in this report. 
In written comments on a draft of this report, the Navy concurred with our recommendation and stated that it will review and reconsider all pertinent facts, including the issues and questions raised in this report, and that CINCLANTFLT will then resubmit a fact and justification package on the establishment of a Northeast Region Commander. The Navy also stated that CINCLANTFLT did follow its published guidance on the establishment of shore activities. It also noted that, although cost is an important consideration, it is not the only factor evaluated in the decision-making process. We agree that cost is not the only factor. Our review of available documentation and discussions with Navy officials indicated that the recommendation to select NWS Earle was based primarily on placing the command in closer proximity to New York City. Less attention was given to other fundamental factors such as operational effectiveness, costs, and core mission responsibilities. Our draft report raised questions about the extent to which the Navy had followed its own criteria for establishing shore activities; because the Navy had limited documentation to support its analyses, we could not be certain that it had met its stipulated requirements. We modified our report to clarify this issue. The full text of the Navy’s comments from the Office of the Chief of Naval Operations is presented in appendix II. To assess the process the Navy used for recommending a site for the Commander, Navy Region Northeast, we reviewed available cost estimate data gathered by staff within the office of the CINCLANTFLT. We did not, however, independently verify the Navy’s cost estimates.
We also reviewed and analyzed CINCLANTFLT’s (1) Fact and Justification Sheet for the recommendation that the command relocate to NWS Earle, New Jersey; (2) facilities data gathered during the decision-making process; (3) Navy Instruction 5450.169D regarding the establishment of shore activities; (4) Instruction 5450.94 regarding the proposed mission, functions, and tasks statement for the Commander, Navy Region Northeast; and (5) other related documentation. We visited and interviewed officials at the Commander, Submarine Group Two, at the NSB New London in Groton, Connecticut, who are currently responsible for regional coordination among CINCLANTFLT activities in the northeast region. We compared the current mission and staffing of the regional coordination office to the proposed mission, functions, and tasks statement for the Commander, Navy Region Northeast. We discussed with these officials the facilities, infrastructure, and base support available to accommodate the new command. We also visited and interviewed officials at NWS Earle, New Jersey, and the naval base at Newport, Rhode Island, to determine how the command would be accommodated if relocated to these locations. We selected these bases for our review because NWS Earle is the base that CINCLANTFLT has recommended as the site for the Commander, Navy Region Northeast, and the naval facilities at Newport are centrally located within the northeast region. We discussed with these officials the facilities, infrastructure, and base support available to accommodate the new command. We met with senior CINCLANTFLT officials on several occasions to brief them on the results of our work. We have incorporated their comments, as appropriate, to enhance the technical accuracy and completeness of our report. We conducted our review from April to August 1998 in accordance with generally accepted government auditing standards. 
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committees on Armed Services and on Appropriations and the House Committees on National Security and on Appropriations; the Director, Office of Management and Budget; and the Secretaries of Defense and the Navy. Copies will also be made available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III: David A. Schmitt, Evaluator-in-Charge; John R. Beauchamp, Evaluator; and Patricia F. Blowe, Evaluator. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary; VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
| Pursuant to a congressional request, GAO reviewed: (1) the Navy's site selection process for its northeast regional command; (2) the extent to which the Navy fully evaluated the costs and implications of establishing a new command at the Naval Weapons Station (NWS) Earle, New Jersey, versus the Naval Submarine Base (NSB) New London, Connecticut or the Naval Undersea Warfare Center located at Newport, Rhode Island; (3) the extent to which the Navy followed its criteria for establishing shore activities and the extent to which it fully analyzed prospective costs of the three sites; and (4) location and infrastructure factors that would affect costs and operations of the new command at each of the three locations. GAO noted that: (1) weaknesses exist in the Navy's process for selecting the location for the headquarters for its new northeast regional command; (2) in selecting NWS Earle, it is not clear to what extent the Navy followed its own criteria for the establishment, disestablishment, or modification of shore activities or fully assessed the comparative costs of establishing and operating the new headquarters at all sites it had indicated were under consideration; (3) the costs to establish the command at NWS Earle may be greater than the Navy estimated; (4) the NWS Earle site has some basic operational limitations compared with at least two other sites, including NSB New London and Newport; (5) these limitations relate to facilities' infrastructure to support the new command and increased travel time and costs associated with operating from NWS Earle; (6) the Navy stated that it needs a flag rank command closer to New York City to attain certain operational benefits; and (7) while this need may be appropriate, questions exist about: (a) how often the need to visit New York City arises; (b) whether the NWS Earle location provides a significant reduction in travel time compared with travel from the current location at NSB New London; and (c) whether it is 
desirable to separate the new command from other centralized support activities located at NSB New London. |
Since 1993, GSA has built 79 new courthouses that replaced or supplemented 66 old courthouses. GSA considers 15 of the new courthouses to be annexes—additions that are often larger than the old courthouses. The judiciary identifies potential courthouse projects based on a capital-planning process that involves consultation with GSA. GSA is responsible for reviewing these projects and completing a feasibility study to further analyze and determine the best option, which may differ from the judiciary’s preferred option, before forwarding projects it approves to the Office of Management and Budget (OMB). If approved by OMB, GSA is responsible for submitting requests to congressional authorizing committees in the form of detailed descriptions or prospectuses, hereafter referred to as “new courthouse proposals.” Following congressional authorization and the appropriation of funds for the projects, GSA manages the site, design, and construction phases. After the tenants occupy the space, GSA charges federal tenants, such as the judiciary, rent for the space they occupy and for their respective share of common areas. In fiscal year 2012, the judiciary’s rent payments to GSA totaled over $1 billion for approximately 42.4 million square feet of space in 779 buildings that include 446 federal courthouses. When new courthouses are built, GSA sometimes retains the buildings for occupancy by the judiciary or other federal tenants or GSA, which has custody and control of the federally-owned buildings, disposes of them as surplus real property. GSA works with the Administrative Office of the U.S. Courts (AOUSC), the judiciary’s administrative office, in addressing courthouse space needs and issues. The rent that federal tenants pay GSA is deposited into the Federal Buildings Fund, a revolving fund that GSA uses to finance the operating and capital costs associated with federal space such as repairs and alterations, new construction, and operations and maintenance. 
When the costs of a project’s capital improvements exceed a specific threshold, currently set at $2.79 million, GSA must submit a prospectus to certain congressional committees for approval prior to the appropriation of funds to meet repair or new construction needs. The judiciary leases more space in federally-owned buildings than any executive or legislative branch agency. Judiciary components housed in courthouses may include a U.S. court of appeals (court of appeals judges, senior circuit judges, and staff); U.S. district court (district judges, magistrate judges, and clerk’s office staff); U.S. bankruptcy court (judges and clerk’s office staff); probation and pretrial services staff; or the office of the federal public defender. In addition to these judicial components, certain executive branch agencies integrally involved with U.S. court activities often lease space in federal courthouses, including the Department of Justice’s U.S. Marshals Service, U.S. Attorney’s Office, and the Office of the U.S. Trustee. In some cases, GSA also leases space for the judiciary in private office buildings. The district courts are the trial courts of the federal court system and occupy the most judiciary space. There are 94 federal judicial districts—at least one for each state, the District of Columbia, and four U.S. territories—organized into 12 regional circuits. Each circuit has a court of appeals whose jurisdiction includes appeals from the district and bankruptcy courts located within the circuit, as well as appeals from decisions of federal administrative agencies. When a new courthouse is built, GSA—rather than the judiciary—decides whether to retain the old courthouse and, when the building is retained, determines how it should be reused. 
In determining whether to retain an old courthouse, GSA considers the building’s condition; its historic or architectural significance; the judiciary’s interest in occupying the building; local market conditions, such as prevailing lease rates for commercial space; and the existing and projected base of other prospective federal tenants within the area. According to GSA, if the agency determines that the government no longer needs the building, the agency generally uses the Federal Property and Administrative Services Act of 1949, as amended, (Property Act) to dispose of it, following the process shown in figure 1. In addition, GSA has other authorities to dispose of old courthouses, such as the Public Buildings Act of 1959, as amended, which follow a different process. As shown in figure 1, GSA may dispose of federal real property through public benefit conveyances (PBC) to state or local governments and certain nonprofits for approved public benefit uses or negotiated sale to state and local government entities, but not before screening the property for use by other federal agencies and homeless service providers if the Department of Housing and Urban Development (HUD) has determined the property suitable for homeless assistance. If no interest is received from eligible public or nonprofit entities, the agency concludes that there is no public benefit use for the property and proceeds with plans to market it for competitive public sale. Forty of the 66 old courthouses replaced or supplemented by new courthouses since 1993 were retained for reuse by the government. GSA disposed of most of the remaining old courthouses through PBCs or sales to state and local governments, eligible nonprofits, or private sector entities. As figure 2 illustrates, of the 40 retained old courthouses, 36 were being reused by the judiciary and other federal tenants; 3 were vacant; and 1 was largely closed for a major renovation. 
Appendix II contains detailed information about the 66 courthouses, including their status, disposal method, proceeds, and current uses or major tenants. Among the retained and reused old courthouses, the judiciary had the largest share of space in 25, some space in 5, and occupied no space in the other 6. The various judiciary tenants are sometimes co-located within the same old courthouse. As of June 2013, of the retained old courthouses, the U.S. district courts occupied space in 15 and the U.S. bankruptcy courts occupied space in 19. The U.S. courts of appeals were using five old courthouses to hear cases. In addition, the judiciary and other federal agencies are sometimes co-located in the same old courthouse. Executive branch agencies, particularly the U.S. Department of Justice, had the largest share of space in 11 of the retained and reused old courthouses. Among the retained old courthouses we reviewed, excluding one building that was under major renovation, about 14 percent of the total space (nearly 1 million square feet) was vacant as of May 2013—significantly higher than the 4.8 percent overall vacancy rate among federally-owned buildings in 2012. Eleven of these old courthouses were more than 25 percent vacant, including three (Miami, FL; Buffalo, NY; and Austin, TX) that were completely vacant. GSA officials told us that replacing tenants in old courthouses can be challenging due to the buildings’ condition and needed renovations, among other reasons, as will be discussed later in this report. For example, the old courthouse in Miami has been vacant since 2008 because, according to GSA, of the high costs of renovating it for reuse. The agency plans to either dispose of the building or lease it to nonfederal tenants. The old courthouse in Buffalo has been vacant since 2011 because, according to GSA, it requires renovations that were not requested in the new courthouse proposal, as will be discussed in further detail later in this report.
According to GSA officials, the old courthouse in Austin was vacated in December 2012 and GSA is working with the judiciary on a possible plan to reuse the building for the U.S. Court of Appeals for the Fifth Circuit. Forty-one percent of the old courthouses were classified by GSA in 2013 as “nonperforming”—i.e., buildings that do not cover their operating expenses and require moderate to significant renovation. In 2012, 30 percent of all federally-owned buildings were classified by the agency as “nonperforming.” Among other factors, GSA considers net operating income in classifying buildings as “nonperforming.” Buildings with net-operating-income losses drain the Federal Buildings Fund, which is GSA’s main source of funding used to maintain, operate, and improve federally-owned buildings. Therefore, old courthouses with net-operating-income losses divert financial resources that could have been used for other buildings and projects. According to GSA, the financial performance of old courthouses may be affected by vacancy rates; low local commercial-market rental rates, on which GSA bases its rates for federal tenants; and operational and administrative expenses associated with the buildings. Of the 41 federally-owned old courthouses, 31 (76 percent) had positive net operating income, totaling about $119 million in fiscal year 2012. The remaining 10 old courthouses had net-operating-income losses totaling about $13 million, most of which was the result of an approximately $9-million loss at the old courthouse in New York City (the Thurgood Marshall U.S. Courthouse) that was wholly vacant for major renovations, followed by the vacant old courthouses in Miami and Buffalo, which had net-operating-income losses of about $1 million and $695,000, respectively. According to GSA officials, the agency focuses on “nonperforming” buildings by developing strategies to address their problems, such as high rates of vacancy.
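The loss figures cited above can be tallied in a short illustrative calculation. This is a sketch using the report’s rounded examples only; the building labels, dollar amounts as exact values, and the classification helper are illustrative assumptions, not GSA’s actual “nonperforming” methodology.

```python
# Illustrative tally of the net-operating-income (NOI) losses cited above.
# Figures are the report's rounded examples; drains_fund() is a hypothetical
# helper, not GSA's actual classification method.

noi_losses = {
    "Thurgood Marshall (New York City)": -9_000_000,  # vacant for renovation
    "Miami": -1_000_000,
    "Buffalo": -695_000,
}

def drains_fund(noi: int) -> bool:
    """Negative NOI means a building does not cover its operating expenses."""
    return noi < 0

draining = {name for name, noi in noi_losses.items() if drains_fund(noi)}
total_loss = -sum(noi_losses.values())
print(f"{len(draining)} example buildings drain about ${total_loss:,} combined")
```

The three example losses sum to roughly $10.7 million of the approximately $13 million in total losses the report attributes to the 10 buildings with negative net operating income.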
Many old courthouses cannot be easily reconfigured to meet current federal space needs, and thus replacing previous tenants can prove difficult. For example, current U.S. courthouse security standards require the construction of three separate circulation patterns for judges, prisoners, and the public, which many old courthouses do not have. In addition, because of their age, old courthouses often need costly renovations, new mechanical systems, and other improvements before they are suitable for reuse. Further, because many of the old courthouses have official historic status—that is, they are listed on the National Register of Historic Places or are eligible for listing—renovations by federal agencies to reuse these buildings for modern office space, for example, must follow the requirements of the National Historic Preservation Act of 1966, as amended, including, among others, compliance with the Department of the Interior’s historic preservation standards. Although an Executive Order and GSA regulations encourage executive branch agencies to seek federally-controlled space, especially in centrally located historic buildings, we found several cases in which GSA faced challenges replacing tenants in old courthouses. For example, according to GSA officials, it took more than 10 years to fill the old courthouse in Sacramento, California, after the judiciary moved to a new courthouse in 1999. The officials said that although the old courthouse is centrally located adjacent to California state office buildings in the downtown area, the building needed renovations before it could be reused by other federal tenants, and that limited parking made it difficult to attract new tenants. In Portland, Oregon, GSA officials told us that the 66-year-old courthouse was not extensively renovated after the judiciary moved to the new courthouse in 1998 and, as a result, some space remains vacant, including three former district courtrooms.
Other old courthouses have experienced greater difficulty in attracting new tenants after the judiciary moved out. The old courthouse in Reno, Nevada, for example, remained nearly half vacant for almost 20 years after the U.S. district court moved to the new courthouse in 1996 and the bankruptcy court remained. GSA officials also said that it takes time—sometimes years—for federal agencies’ leases in commercial buildings to expire before they can relocate to federally-owned space. Moreover, in some cases, we found that GSA has been unable to find enough tenants to justify retaining buildings. For example, in Springfield, Massachusetts, although GSA initially planned to retain the old courthouse, the agency decided to dispose of it after the federal tenants that were expected to occupy the building changed their plans, which GSA determined would have resulted in a long-term vacancy rate of at least 40 percent. Even when federal tenants, such as the judiciary, rent space in old courthouses, the space may sometimes be unused or underutilized. In Trenton, New Jersey, for example, we found that the judiciary paid rent to GSA for space that, according to the judiciary, was used as a district courtroom until the construction of the new courthouse in 1994. However, the judiciary released this space to GSA only in 2012. In addition, the judiciary was also paying rent for 3 other courtrooms in the same building but using them as office and meeting space instead of using them to conduct trials (fig. 3). According to the judiciary, these rooms were not used as courtrooms because, among other reasons, they did not meet modern security standards. We have previously raised concerns about the amount of space that the judiciary occupies. The judiciary plans to return 2 of the 3 courtrooms to GSA in October 2013.
Similarly, in Camden, New Jersey, we found that the judiciary paid rent to GSA since 1994 for courtroom space that has not been built, underscoring the importance of effective space planning in new and old courthouses to reduce the government’s real property costs. The judiciary plans to return that space to GSA in October 2013. We also found the judiciary’s space planning in old courthouses may need to further consider changes in technology and trends in court operations. For example, in Camden, a U.S. bankruptcy court official told us that the need for file space had been reduced with growth in electronic filing. In Richmond, Virginia, judiciary officials told us that the use of law libraries has decreased with the growing popularity of online legal research. In addition, we found that the old courthouse in Richmond was mostly retained for use by the U.S. courts of appeals even though it is unclear whether the caseload at this location justified that amount of space. Specifically, an appellate judge in Richmond told us that the court has reduced how often it uses the courthouse for oral arguments (4-day periods known as “court weeks”) from eight times per year to six times per year due to improvements in efficiency. We found the appeals court in Richmond generally uses its six courtrooms simultaneously about 16 percent of the time. We have previously noted that older courthouses are suitable for use by the U.S. courts of appeals, given their comparatively lower security needs. Judiciary officials added that reuse of old courthouses with historic features, such as in Richmond, is an ideal arrangement given limited opportunities for other reuses. However, we have previously raised concerns about the lack of space allocation criteria for the U.S. courts of appeals and will review space utilization by the appeals court in a future study. The difference between the U.S. 
Courts Design Guide (Design Guide) baseline for libraries in new courthouses, about 9,200 square feet, and the existing library space in Richmond, about 17,000 square feet, raises questions about whether the entire space is needed. The Design Guide does not specify how much space should be allotted for judiciary functions, such as clerk space, libraries, and courtrooms, in existing buildings. According to judiciary officials, space configurations in existing buildings make them difficult to retrofit consistent with current design standards. As a result, the judiciary does not apply its space planning tool, which uses Design Guide specifications, for space planning in old courthouses. However, AOUSC officials told us that the judiciary’s annual $1-billion rent costs are unsustainable and that they are developing a program, called Right Fit, to examine opportunities to reduce the amount of space leased from GSA. GSA disposed of 25 of the 66 old courthouses we reviewed through PBCs, sales, or exchanges. As shown in figure 4, GSA disposed of most courthouses through sales or exchanges (65 percent), followed by PBCs (31 percent). Of the 17 old courthouses that GSA sold or exchanged, 14 were sold and 3 were exchanged for land used for the new courthouses. From the buildings it sold, GSA realized a total of about $20 million in proceeds. Sales prices for these buildings ranged from $200,000 for the old courthouse in Greeneville, Tennessee, to $5.4 million for the old courthouse in Minneapolis, Minnesota, with an average sale price of $1.5 million. Purchasers of the 14 old courthouses disposed of by sale included state and local governments as well as private sector entities. Most old courthouses disposed of by PBC were transferred through historic monument conveyances, which, according to GSA, provide the greatest flexibility with regard to the future use of the building.
Specifically, as long as the buildings’ historic features are preserved in accordance with the Secretary of the Interior’s Standards for Rehabilitation, the recipients may develop plans for a wide variety of uses. For example, old courthouses that were disposed of by historic monument conveyance included buildings that are being reused, or are planned for reuse, as affordable housing, a hotel, and for state and local government functions. Other old courthouses that were disposed of by PBCs were being used for educational purposes and for criminal justice purposes such as juvenile justice centers. Although PBCs do not typically generate any financial proceeds for the federal government, the public continues to realize a benefit from the buildings because they are conveyed with deed restrictions that ensure the building will be used for the approved public benefit purpose. According to GSA, cost savings and cost avoidance are often realized with the disposal of unused or underutilized property, including properties disposed of via PBC. We found that it took GSA an average of 525 days, or 1.4 years, to dispose of the old courthouses we reviewed. This excludes any time the buildings may have been vacant—sometimes years—before they were declared excess, the point at which GSA decides to dispose of them. GSA officials said that the disposal process can be lengthy because old courthouses often (1) have a high level of congressional and public interest that can generate competing inquiries regarding their future use; (2) are historic and, thus, subject to lengthy reviews; and (3) have specialized designs that make the buildings difficult to reuse for other purposes. We attempted to compare how long it took GSA to dispose of the old courthouses with the length of time it took to dispose of all types of properties from its nationwide portfolio of federally-owned buildings, but were unable to make such a comparison due to data reliability issues.
GSA’s data on disposal times for all properties in its portfolio included “holds” when the disposal times were suspended to account for situations that GSA deemed to be out of its control, such as pending legislation, litigation, environmental concerns, and historic preservation reviews. To attempt to review comparable data, we asked GSA to provide information on the length of holds and the reasons for the holds that were placed on the disposal times for old courthouses in our review. However, the data were incomplete, and in some cases, the explanations for the holds did not fall within the categories that GSA defined as being out of its control. As noted above, we found that during the disposal process, buildings might remain vacant for extended periods of time and, thus, do not earn any revenue to help offset the costs to operate mechanical systems and perform maintenance to help prevent deterioration. For example:
Hammond, Indiana: Disposal of the old courthouse took 2 years. According to GSA, after the judiciary vacated the old courthouse in 2002, GSA and the City of Hammond conducted extensive discussions regarding the possibility of exchanging the building for city-owned land that could be used for parking space at the new courthouse. Those negotiations, however, were unsuccessful. After remaining vacant for 6 years, in 2006, GSA declared the property excess, at which point the City of Hammond expressed interest in purchasing it through a negotiated sale. When the city subsequently decided it did not want the building, GSA decided against holding a public auction, given the minimal demand for property in the local market. In 2009, GSA sold the building through a negotiated sale to the First Baptist Church of Hammond, which owned a complex of buildings near the old courthouse, for $550,000.
Kansas City, Missouri: Disposal of the old courthouse took 3 years.
After the judiciary vacated the old courthouse in 1998, GSA initially tried to retain the building, which opened in 1939, because of its historical significance and location, but after studying alternative uses, found that it was impractical for the federal government to reinvest in the building. In 2003, GSA retained a private development company to explore and promote public uses of the building by local governments and institutions, but none demonstrated the ability to make full use of the old courthouse. As a result, after remaining vacant for 7 years, in 2005, GSA determined that the building was excess property and it was screened for PBC use. Subsequently, the City of Kansas City applied for a historic monument PBC and the building was conveyed to the city in 2008. According to GSA officials, the length of time between the report of excess by GSA and the building’s conveyance to the city was partly caused by the complexity of the project’s financing. Nearly half of the 25 disposed old courthouses were historic buildings listed on the National Register of Historic Places. According to GSA, any action, including disposal, involving buildings that are listed or eligible for listing on the National Register of Historic Places may be very lengthy because of the reviews required by the National Historic Preservation Act. Disposing of old courthouses also involves the screening of properties for potential use by organizations serving the homeless. More specifically, under the McKinney-Vento Homeless Assistance (McKinney-Vento) Act, as amended, HUD is to solicit information on a quarterly basis from federal landholding agencies regarding federal buildings that are excess property, surplus property, or that are described as unutilized or underutilized in surveys by the heads of landholding agencies. HUD is then to identify and publish in the Federal Register those buildings that are suitable for use to assist the homeless.
The Secretary of Health and Human Services is then to evaluate applications by representatives of the homeless for the use of such properties. In general, upon an approved application, such property is to be disposed of with priority consideration of surplus property given to potential uses to assist the homeless. Although excess real property, including old courthouses, must be reported to and evaluated by HUD for suitability for homeless use, we found no instances in which old courthouses were conveyed for this purpose. We found only one case, in Coeur d’Alene, Idaho, in which a homeless services provider’s application to use the old courthouse was approved, but the organization subsequently withdrew its application after determining that the expense of renovating, maintaining, and operating the building would have been too costly. The old courthouse in Coeur d’Alene was eventually conveyed to the county government under a historic monument PBC and is now used as a juvenile justice facility. In 1997 and 2000, GSA disposed of two old courthouses (St. Louis and Ft. Myers) using an authority known as the “35 Act.” The agency interpreted the “35 Act” as not being subject to the surveying and reporting requirements of the Property Act and, in turn, the homeless-assistance screening process established by the McKinney-Vento Act because, according to GSA, the “35 Act” did not require the agency to declare a property as excess or surplus. As a result, GSA allowed the properties to be sold to local governments without reporting them to HUD or subjecting them to the homeless-assistance screening process. However, an April 2000 federal district court opinion regarding the potential sale of a Lafayette, Louisiana, courthouse under this authority rejected GSA’s interpretation of the law, ruling that property transferrable under the “35 Act” is subject to the surveying and reporting requirements of the Property Act and the McKinney-Vento Act.
In addition, we found that GSA’s decision to dispose of old courthouses may be subject to change. For example, in Reno, Nevada, GSA initially decided to dispose of the old courthouse, but later decided to retain it after the judiciary raised objections and GSA further studied its decision. According to GSA, the agency had decided to dispose of the old courthouse because of the high cost of reinforcing the building against potential earthquake damage, but subsequently determined that it was the least costly alternative for housing the U.S. bankruptcy court, which was already occupying space in the building and preferred to stay. Moreover, the agency determined that tenants could reasonably accept the seismic risk of occupying the building through 2022. Other alternatives that GSA considered for housing the U.S. bankruptcy court in Reno included constructing a new building or expanding the existing new courthouse, which would likely have required additional congressional appropriations. Potential new owners of old courthouses face some challenges similar to those that GSA faced in re-using old courthouses. These challenges can affect the agency’s ability to dispose of the buildings. Representatives of the new owners of six old courthouses we reviewed told us that the buildings were being used––or will be used––for an art center, hotel, bank, affordable housing, church administration, and office space. In adapting the old courthouses to their current uses, these representatives told us they faced various challenges, including securing financing, making renovations, and meeting historic preservation requirements. However, several representatives said they were interested in acquiring the old courthouses for reasons such as their locations, architectural style, the quality of the construction materials, historic significance, and because they were able to purchase them at prices that were lower than the cost of constructing new buildings. 
In addition, two representatives said that historic preservation tax credits are sometimes an important incentive in redeveloping historic buildings that the government is disposing of. For example, the developer’s representative for the old courthouse in Tampa, which is being converted into a hotel, said his company specifically became interested in the building because it qualified for federal historic-preservation tax credits. Those tax credits authorize a 20 percent credit in any taxable year on qualified rehabilitation expenses with respect to certified historic structures. Below are examples of old courthouses converted for alternate uses. (Additional examples are provided in Appendix III.)
Former U.S. Courthouse, Kansas City, Missouri, now the Courthouse Lofts
The former U.S. Courthouse in Kansas City, Missouri, built in 1940, is now an apartment building (see fig. 5). In 2008, after finding no other tenants or uses for the building, GSA conveyed it by PBC to a Kansas City redevelopment agency. The city worked with a developer to convert the old courthouse into an affordable-housing development that opened in 2011. An official from the city agency that acquired the old courthouse said the building’s conversion to affordable housing was part of downtown revitalization efforts. A representative of the building’s developer told us that the building has 176 loft-style apartments and the offices of a law firm. The representative added that because the retained former courtrooms are not frequently used, the company is exploring having them used as a law library or as a venue for mock trials.
Former George W. Whitehurst Federal Building in Ft. Myers, Florida, now the Sidney & Berne Davis Art Center
The former George W. Whitehurst Federal Building in Ft. Myers, Florida, built in 1933, is now used as an art center and event space. In 2000, GSA sold the building to the City of Ft. Myers for $215,000 (see fig. 6).
In 2003, after soliciting proposals to use the building, the city leased it to Florida Arts, Incorporated, and the building is now known as the Sidney & Berne Davis Art Center. According to an art center representative, the building is now used for events such as musical, dance, and theatrical performances. The representative added that the two former courtrooms were not historic and were not retained. While old courthouses are often retained to meet federal space needs, renovations that are key to re-using old courthouses are often not included in GSA’s proposals to Congress for new courthouses. GSA is not specifically required by statute to include plans for old courthouses in its proposals to Congress for new buildings. According to GSA officials, it can be challenging to include these plans because new courthouses often take many years to complete and reliable cost estimates for renovations are not always available when they are proposed. However, GSA’s proposals are required under statute to include, among other things, a “comprehensive plan” to, in general, provide space for all federal employees in the locality of a proposed new building “having due regard” for suitable space that may continue to be available in nearby existing government buildings. In addition, OMB and we have previously reported that complete cost estimates are a best practice in capital planning. Moreover, while old courthouse plans may not be available when a new courthouse is initially proposed, GSA periodically updates Congress after the initial proposal to obtain additional authorizations. Since fiscal year 1993, Congress has appropriated a total of more than $760 million for courthouse renovations, and with 12 new courthouses planned to replace or supplement existing courthouses, more funding requests for renovations will likely be forthcoming.
GSA officials told us that renovations are often necessary to effectively reuse old courthouses and that several old courthouses were underutilized because they need renovating. We found that new courthouse proposals often included plans for the old courthouses, but few discussed whether renovations were needed to realize these plans. For 33 of the 40 old courthouses retained by GSA, the new courthouse proposals specified that the old courthouses would be reused for federal tenants. However, only 15 of the 40 new courthouse proposals addressed whether renovations were needed in the old courthouse, and only 11 included estimates of the renovation costs. Nearly all of the proposals that included a renovation cost involved annexes to the old courthouses; in such cases, renovation costs are often included in the cost of constructing the annex because GSA cannot separate them out. Among the retained old courthouses that we visited, seven required renovations to be reused; three still require renovations; and one neither had nor requires renovations. For seven of those we visited, the new courthouse proposal included no discussion of the need to renovate the existing courthouse. Moreover, for eight, the new courthouse proposal did not include discussion of federal tenants in commercially-leased space near the old courthouses, and none included discussion of the long-term costs associated with federal tenants staying in commercially-leased space versus occupying space in the old courthouses. In contrast, we found that most of the 40 new courthouse proposals we reviewed included discussion of the 30-year costs associated with using commercially-leased space versus building a new courthouse. We have previously reported that leasing commercial space is often more costly than using government-owned space.
Examples of old courthouses we visited for which the new courthouse proposal did not include discussions of renovations or federal tenants in nearby commercially-leased space include:
Portland, Oregon: About 21 percent of the old courthouse (approximately 33,000 square feet) was vacant as of May 2013, including three courtrooms. (See fig. 7.) The new courthouse proposal specified that the U.S. bankruptcy court would move into the old courthouse. However, according to judiciary and GSA officials, this plan was contingent on renovation needs and costs, which we found were not included in the new courthouse proposal. These renovations have not been completed, and the bankruptcy court instead leases space in a commercial building at an annual cost of about $1.3 million.
Richmond, Virginia: About 15 percent of the old courthouse (approximately 26,000 square feet) was vacant as of May 2013. Although the new courthouse proposal specified that the existing courthouse would be used by the U.S. court of appeals, it did not specify that renovations would be needed to fully realize this plan. As a result, although the U.S. district court relocated from the building in 2008, a U.S. court of appeals office has remained in commercially-leased space in the city at an annual cost of about $362,000. In 2013, GSA requested $3.9 million to renovate the vacant space so that it can be reused by the U.S. court of appeals.
Orlando, Florida: The new courthouse opened in 2007, but the old courthouse remained mostly vacant for several years until renovations not specified in the new courthouse proposal were completed. Specifically, according to GSA, about 85 percent of the old courthouse remained vacant until a $48 million renovation project was completed with funding provided under the American Recovery and Reinvestment Act of 2009. As of May 2013, the vacancy rate had fallen to about 23 percent (approximately 45,500 square feet).
Judiciary officials noted that when new courthouse space is constructed as an annex, such as in Orlando, the old courthouse must be retained for use by the judiciary either in full or in part. The Orlando case illustrates the importance of planning for the old courthouse when the new courthouse will be an annex to the old building. In addition to the courthouses that we visited, we also found that other old courthouses were wholly or mostly vacant due to needed renovations not included in the new courthouse proposals. In Miami, the new courthouse proposal specified that the U.S. bankruptcy court and other tenants would move into the old courthouse. (See fig. 8.) However, according to GSA, renovation needs totaling about $60 million prevent this plan from proceeding, and, as a result, the building remains vacant. GSA indicated that more than $10 million would be required to separate the old courthouse in our review (David W. Dyer Federal Building and U.S. Courthouse) from another old courthouse built in 1983 (C. Clyde Atkins U.S. Courthouse); the two buildings share building systems, a common courtyard, and an underground parking facility. In Buffalo, the new courthouse proposal specified that the U.S. bankruptcy court and other government tenants would relocate to the old courthouse. However, while tenants relocated from the old courthouse in 2011, it remains vacant pending $25 million in renovations that were not included in the new courthouse proposal. According to GSA, numerous federal tenants remained in commercially leased space in Buffalo, including the U.S. bankruptcy court, at an annual cost of about $360,000. In Columbia, South Carolina, about 73 percent of the old courthouse was vacant as of May 2013, including large areas designed for court use, which, according to GSA, would cost about $38 million to renovate for other uses. The new courthouse proposal specified that the U.S.
bankruptcy court would move into the old courthouse, yet the bankruptcy court instead occupies another old federal courthouse, leaving GSA with court-configured space in the old courthouse that the agency has had difficulty re-using. Given the government’s multibillion-dollar investment in new courthouses and the challenges inherent to re-using or disposing of old courthouses, comprehensive planning regarding both the new and old courthouses is critical to ensure that federal operations are housed in the most cost-effective manner. We believe that comprehensive planning includes identifying challenges associated with re-using or disposing of the old courthouses, including renovation needs and estimated costs when the buildings are expected to be reused. By not consistently including the need for renovation and estimates of renovation costs in its new courthouse proposals, GSA is not providing Congress and other stakeholders with key information needed to make informed decisions about new courthouse projects. Although there may be challenges to providing accurate costs for future renovations to the old courthouses, estimates of these costs and, as necessary, periodic updates of changes in these costs would provide greater transparency to congressional decision makers regarding the full costs of courthouse projects. Further, although neither federal statute nor GSA specifically requires proposals for courthouses to include plans for old courthouses, federal statute does require GSA’s proposals to include a comprehensive plan considering space that may continue to be available in nearby existing government buildings. To the extent that the agency’s plans for housing federal tenants include using both the old and new courthouses, we believe such related renovation plans should be viewed as an integral part of the comprehensive plan.
Moreover, when the plans involve re-locating federal tenants from commercially-leased space to the old courthouses, a comprehensive plan would include estimates regarding long-term costs versus continuing to use commercially-leased space. To improve the transparency of cost information regarding the retention and reuse of old courthouses, we recommend that when proposing new courthouses, the Administrator of the General Services Administration, in consultation with the judiciary as appropriate, include plans for re-using or disposing of the old courthouses; challenges with implementing those plans, including any required renovations and related cost estimates, to be updated as needed; and when the plans involve re-locating federal tenants from commercially-leased space to the old courthouses, estimates of the long-term costs of occupying the old courthouses versus continuing to occupy commercially-leased space. We provided copies of a draft of this report to GSA and AOUSC for review and comment. GSA concurred with the recommendation and AOUSC agreed that GSA and the judiciary should continue to work together to address the judiciary’s housing needs, but indicated that it is important not to delay the authorization and funding of new projects. GSA’s letter can be found in Appendix IV. AOUSC’s letter can be found in Appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrator of GSA and the Director of the AOUSC. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to determine (1) how the government is re-using old courthouses that were retained and the challenges involved; (2) how GSA disposed of old courthouses, the process involved, and the results; and (3) the extent to which GSA’s proposals for new courthouses considered the future use of the old courthouses. To determine how the government is re-using old courthouses that were retained and the challenges involved, we collected and analyzed GSA data on the 66 old courthouses that were replaced or supplemented by new courthouses from 1993 to 2012. With regard to old courthouses that GSA retained, we reviewed GSA data on the buildings’ current uses, tenants, financial performance, and vacancy rates. For comparison purposes, we compared the percentage of old courthouses that the agency categorized as “nonperforming” with the percentage in the agency’s national real estate portfolio during 2012. To determine how GSA disposed of old courthouses, the process involved, and the results, we reviewed the authorities that GSA used to dispose of the buildings and data on the length of time it took to dispose of them and the proceeds, and we interviewed representatives of the new owners of selected disposed old courthouses. We attempted to compare how long it took for GSA, on average, to dispose of the old courthouses in our review with how long GSA took to dispose of all types of properties from its nationwide portfolio of federally-owned buildings, but were unable to make such a comparison due to data reliability issues. GSA’s data on disposal times for all properties in its portfolio included “holds” when the disposal times were suspended to account for situations that it deemed to be out of its control, such as pending legislation, litigation, environmental concerns, and historic preservation reviews.
To attempt to review comparable data, we asked GSA to provide information on the length of holds and the reasons for the holds that were placed on the disposal times for old courthouses in our review. However, the data were incomplete, and in some cases, the explanations for the holds did not fall within the categories that GSA defined as being out of its control. We also reviewed other GSA data for completeness and determined that they were sufficiently reliable for the purposes of this report. To verify the data, we obtained information from GSA about how the data were collected, reviewed our prior evaluation of similar GSA data, and corroborated certain data with current and previous owners of old courthouses and through our research. We also interviewed GSA officials about the factors they considered when deciding whether to reuse or dispose of old courthouses and the challenges involved, and reviewed building retention and disposal studies and applicable laws, regulations, and agency policies. In order to provide greater insight on reuses and disposals of old courthouses, we focused on 17 old courthouses as case studies, including 13 that we visited (Boston, MA; Camden, NJ; Eugene, OR; Ft. Myers; FL; Orlando, FL; Portland, OR; Reno, NV; Richmond, VA; Sacramento, CA; Springfield, MA; Tallahassee, FL; Tampa, FL; and Trenton, NJ) and 4 about which we interviewed GSA officials by phone (Coeur d’Alene, ID; Greeneville, TN; Hammond, IN; and Kansas City, MO). For all of our case studies, we reviewed GSA data and other documents and interviewed GSA officials. In 10 locations, we also interviewed judges and judiciary officials about their use of the old courthouses or disposal of those buildings. We selected the 17 old courthouses that represented a mix of retained and disposed buildings located in geographically diverse areas. 
We interviewed GSA officials about the old courthouse in Coeur d’Alene, Idaho, because, among the 66 old courthouses in our review, it was the only instance in which a homeless services provider’s application to use the old courthouse was approved. We interviewed representatives of the new owners of six old courthouses that were converted for alternate uses (Ft. Myers, FL; Greeneville, TN; Hammond, IN; Kansas City, MO; Springfield, MA; and Tampa, FL) about challenges involved in the disposal process and the buildings’ reuse. In three locations (Ft. Myers, FL; Springfield, MA; and Tampa, FL) we also visited the former courthouses. Because this is a nonprobability sample, observations from our review of the 17 case studies do not support generalizations about other old courthouses. Rather, the observations provide specific, detailed examples of selected old courthouse reuses. To determine the extent to which GSA’s proposals for new courthouses built from 1993 through 2012 considered the future use of the old courthouses, we reviewed the proposals submitted to Congress for new courthouses in locations where the old courthouses were retained and appropriations made for renovating those old courthouses. We also reviewed pertinent laws, GSA regulations and policies on courthouse construction planning and space utilization, and prior GAO reports on courthouse construction and the cost of federal tenants’ use of leased versus federally-owned space. In addition, we interviewed GSA officials about new courthouse proposals and the renovations needed to reuse old courthouse space. We conducted this performance audit from November 2012 through September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The old courthouses in our review are listed below, giving the old courthouse and year built; the disposal method and year disposed, where the building was disposed of; and the new use, where applicable. (PBC denotes a public benefit conveyance.)

Frank M. Johnson Jr. Federal Building and U.S. Courthouse (1933)
Federal Building and U.S. Courthouse (1968); PBC (2012); County school administration (Central Office for the Tuscaloosa County School System)
Richard Sheppard Arnold U.S. Post Office and Courthouse (1932)
Phoenix Federal Building and U.S. Courthouse (1962)
James A. Walsh Courthouse (1930)
B.F. Sisk U.S. Courthouse (1968); PBC (2007); State courthouse (B.F. Sisk Courthouse)
John E. Moss Federal Building (1961)
Edward J. Schwartz Federal Building and U.S. Courthouse (1976)
Santa Ana Federal Building (1975)
Byron Rogers Federal Building and U.S. Courthouse (1965) (mostly vacant due to major renovations)
Elijah Barrett Prettyman U.S. Courthouse (1952)
George W. Whitehurst Federal Building (1933); Sale (2000); Art center (Sidney & Berne Davis Art Center)
U.S. Post Office and Courthouse (1933); Sale (2002)
David W. Dyer Federal Building and U.S. Courthouse (1933)
George C. Young Federal Building and Courthouse (1975)
U.S. Courthouse (1937)
U.S. Tampa Classic Courthouse (1905); PBC (2003)
U.S. Post Office-Courthouse (1912); Sale (2002)
Property exchange (2010); City government (City Hall)
Federal Building and Courthouse (1928); PBC (2009); County courts (Juvenile Justice Building)
Federal Building and U.S. Courthouse (1910)
U.S. Courthouse (1977); Property exchange (2010); County courts (Winnebago County Juvenile Justice Center)
Federal Building and US Courthouse (1904); PBC (1991); County cultural center (Springer Cultural Center)
Federal Building and U.S. Courthouse (1906); Sale (2009); Church administration (First Baptist Church of Hammond Administrative Office Building)
Federal Building-Post Office-Courthouse (1959); PBC (1995); County courts (Correctional and Court Services Building)
Federal Building and U.S. Courthouse (1933)
Lafayette Federal Building & Courthouse (1960); Sale (2001)
John W. McCormack Post Office and U.S. Courthouse (1933)
Federal Office Building (1982); Sale (2009)
Sale (1999); State courts (Family Justice Center)
Cape Girardeau Federal Building and Courthouse (1967); Sale (2012)
U.S. Courthouse (1940); PBC (2008); Apartment building (Courthouse Lofts)
U.S. Court and Customhouse (1935); Sale (1997); State courts and city offices (Carnahan Courthouse)
James O. Eastland Federal Building – Courthouse (1933); Sale (2011)
James F. Battin Federal Building and U.S. Courthouse (1965); Sale (2013)
Federal Building/Courthouse (1931)
Edward Zorinsky Federal Building (1960)
James C. Cleveland Federal Building (1966)
U.S. Post Office and Courthouse (1932)
Clarkson S. Fisher U.S. Courthouse (1933)
D. Chavez Federal Building (1965)
Runnels Federal Building (1974)
Foley Federal Building and U.S. Courthouse (1967)
C. Clifton Young Federal Building and U.S. Courthouse (1965)
Emmanuel Celler U.S. Courthouse (1963)
Michael J. Dillon U.S. Courthouse (1936)
Thurgood Marshall U.S. Courthouse (1936)
Howard M. Metzenbaum U.S. Courthouse (1910)
Thomas D. Lambros Federal Building and U.S. Courthouse (1995)
Eugene Federal Building (1974)
Gus J. Solomon Courthouse (1933)
Federal Building and Courthouse (1938)
William J. Nealon Federal Building and Courthouse (1931)
Strom Thurmond U.S. Courthouse (1978)
Federal Building-U.S. Courthouse (1904); Sale (2002); Commercial bank (Greeneville Federal Bank Main Office)
Brownsville U.S. Post Office and Courthouse (1933); Property exchange (1996); City government (City Hall/Old Federal Courthouse)
Corpus Christi Courthouse (1918); Sale (2002)
Post Office and U.S. Courthouse (1906)
Martin V.B. Bostetter Courthouse (1931)
Lewis F. Powell Jr. U.S. Courthouse and Annex (1858)
William Kenzo Nakamura Courthouse (1940)
Beckley Federal Building (1933); PBC (2001); Regional education service agencies (RESA 1 Building)
Post Office and Courthouse (1961); Sale (1999)
Federal Building and U.S. Courthouse (1907)

GSA did not have information on whether financial proceeds were received regarding the property exchange involving the old courthouse in Brownsville.

Former Federal Building and U.S. Courthouse, Hammond, Indiana, now the Administrative Office Building of the First Baptist Church of Hammond

The former Federal Building and U.S. Courthouse in Hammond, Indiana, built in 1906, is now used by a church for office and meeting space (see fig. 9). In 2009, GSA sold the old courthouse to the First Baptist Church of Hammond for $550,000. According to a church representative, one of the building’s three former courtrooms is mainly used as a meeting room, but also for church services, weddings, and funerals; the other two former courtrooms have few or no remnants of their prior use.

Former Federal Building-U.S. Courthouse, Greeneville, Tennessee, now the Greeneville Federal Bank Main Office

The former Federal Building-U.S. Courthouse in Greeneville, Tennessee, built in 1904, is now being used as a bank (see fig. 10). In 2002, GSA sold the old courthouse to the Greeneville Federal Bank for $200,000. According to a bank representative, of the building’s three former courtrooms, one is now used as an employee training room and the other two were reconfigured for lobby, teller, and conference room space.

Former U.S. Tampa Classic Courthouse, Tampa, Florida

The former U.S. Tampa Classic Courthouse in Tampa, Florida, built in 1905, is being converted into a hotel (see fig. 11). In 2003, after finding no other tenants or uses for the building, GSA conveyed the old courthouse to the City of Tampa as a PBC. In 2012, the city leased the building to a developer that proposed to convert the building into a hotel.
According to a representative from the developer, the hotel will have 130 rooms. The representative said that the building’s two historic former courtrooms will be used for a restaurant and ballroom/banquet facility. The hotel is expected to open in 2014.

In addition to the contact named above, Keith Cunningham, Assistant Director; Lindsay Bach; Lorraine Ettaro; Geoffrey Hamilton; Bob Homan; and James Leonard made key contributions to this report.

Federal Real Property: Excess and Underutilized Property Is an Ongoing Challenge. GAO-13-573T. Washington, D.C.: April 25, 2013.
Federal Courthouses: Recommended Construction Projects Should Be Evaluated under New Capital-Planning Process. GAO-13-263. Washington, D.C.: April 11, 2013.
Federal Courthouse Construction: Nationwide Space and Cost Overages Also Apply to Miami Project. GAO-13-461T. Washington, D.C.: March 8, 2013.
Federal Real Property: Improved Data Needed to Strategically Manage Historic Buildings, Address Multiple Challenges. GAO-13-35. Washington, D.C.: December 11, 2012.
Federal Buildings Fund: Improved Transparency and Long-term Plan Needed to Clarify Capital Funding Priorities. GAO-12-646. Washington, D.C.: July 12, 2012.
Federal Real Property: National Strategy and Better Data Needed to Improve Management of Excess and Underutilized Property. GAO-12-645. Washington, D.C.: June 20, 2012.
Federal Courthouse Construction: Better Planning, Oversight, and Courtroom Sharing Needed to Address Future Costs. GAO-10-417. Washington, D.C.: June 21, 2010.
Federal Real Property: Authorities and Actions Regarding Enhanced Use Leases and Sale of Unneeded Real Property. GAO-09-283R. Washington, D.C.: February 17, 2009.
Federal Real Property: Strategy Needed to Address Agencies’ Long-standing Reliance on Costly Leasing. GAO-08-197. Washington, D.C.: January 24, 2008.
Federal Courthouses: Rent Increases Due to New Space and Growing Energy and Security Costs Require Better Tracking and Management. GAO-06-613. Washington, D.C.: June 20, 2006.
Courthouse Construction: Information on Project Cost and Size Changes Would Help to Enhance Oversight. GAO-05-673. Washington, D.C.: June 30, 2005.

| During the last 20 years, GSA built 79 new courthouses for the judiciary that replaced or supplemented 66 old courthouses. Retaining and re-using or disposing of old courthouses can be challenging for GSA because many of them are more than 80 years old, do not meet current court security standards, and have historic features that must be preserved by federal agencies in accordance with historic preservation requirements. GAO was asked to review how GSA and the judiciary are planning and managing the reuse or disposal of old courthouses. GAO examined (1) how the government is re-using old courthouses that were retained and the challenges involved; (2) how GSA disposed of old courthouses, the process involved, and the results; and (3) the extent to which GSA's proposals for new courthouses considered the future use of old courthouses. As case studies, GAO selected 17 old courthouses to represent a mix of retained and disposed buildings located in geographically diverse areas. Of the 66 old federal courthouses that GAO reviewed, the General Services Administration (GSA) retained 40, disposed of 25, and is in the process of disposing of another. Of the retained old courthouses, the judiciary occupies 30 of them, 25 as the main tenant, most commonly with the district and bankruptcy courts. When determining whether to retain and reuse or to dispose of old courthouses, GSA considers, among other things, a building's condition, the local real estate market, and the existing and projected base of federal tenants. GSA officials said that after the judiciary moves to new courthouses, old courthouses often require renovations to be reused.
Moreover, GSA officials said that it can be challenging to find new tenants for old courthouses due to the buildings' condition and needed renovations, among other reasons. Among the retained old courthouses GAO reviewed, excluding one building that was under major renovation, about 14 percent of the total space (nearly 1 million square feet) in them was vacant as of May 2013--significantly higher than the 4.8 percent overall vacant space in federally-owned buildings in 2012. GAO found that GSA took about 1.4 years to dispose of old courthouses that the agency determined were no longer needed. GSA officials told GAO that multiple parties' interest in re-using the old courthouses, the historic status of many buildings, and their specialized designs can slow the disposal process. GSA is not specifically required by statute to include plans for old courthouses in its proposals to Congress for new courthouses. However, as with other building proposals over a certain dollar threshold, GSA is required to include, among other things, a "comprehensive plan" to provide space for all federal employees in the area, considering suitable space that may be available in nearby existing government buildings. In addition, GAO and the Office of Management and Budget have previously reported that complete cost estimates are a best practice in capital planning. GAO found that renovations needed to reuse the old courthouses, totaling over $760 million to date, were often not included in GSA's new courthouse proposals. Specifically, for 33 of the 40 retained old courthouses, the new courthouse proposals described plans for reuse by federal tenants, but only 15 proposals specified whether renovations were needed to realize these plans, and only 11 included estimates of the renovation costs. GAO found that some old courthouses were partially or wholly vacant while awaiting renovation funding, sometimes resulting in money spent leasing space in commercial buildings for the judiciary.
In proposing new courthouses, GSA, in consultation with the judiciary, should include plans for re-using or disposing of old courthouses, any required renovations and the estimated costs, and any other challenges to re-using or disposing of the buildings. GSA concurred with the recommendation, and the Administrative Office of the United States Courts (AOUSC) agreed that GSA and the judiciary should work together to address the judiciary's housing needs. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
FAMS was originally established as the Sky Marshal program in the 1970s to counter hijackers. In response to 9/11, the Aviation and Transportation Security Act expanded FAMS’s mission and workforce and mandated the deployment of federal air marshals on high-security risk flights. Within the 10-month period immediately following 9/11, the number of air marshals grew significantly. Also, during subsequent years, FAMS underwent various organizational transfers. Initially, FAMS was transferred within the Department of Transportation from the Federal Aviation Administration to the newly created TSA. In March 2003, FAMS moved, along with TSA, to the newly established DHS. In November 2003, FAMS was transferred to U.S. Immigration and Customs Enforcement (ICE). Then, about 2 years later, FAMS was transferred back to TSA in the fall of 2005. FAMS deploys thousands of federal air marshals to a significant number of daily domestic and international flights. In carrying out this core mission of FAMS, air marshals are deployed in teams to various passenger flights. Such deployments are based on FAMS’s concept of operations, which guides the agency in its selection of flights to cover. Once flights are selected for coverage, FAMS officials stated that they must schedule air marshals based on their availability, the logistics of getting individual air marshals in position to make a flight, and applicable workday rules. At times, air marshals may have ground-based assignments. On a short- term basis, for example, air marshals participate in Visible Intermodal Prevention and Response (VIPR) teams, which provide security nationwide for all modes of transportation. After the March 2004 train bombings in Madrid, TSA created and deployed VIPR teams to enhance security on U.S. rail and mass transit systems nationwide. 
Comprised of TSA personnel that include federal air marshals—as well as transportation security inspectors, transportation security officers, behavioral detection officers, and explosives detection canines—the VIPR teams are intended to work with local security and law enforcement officials to supplement existing security resources, provide a deterrent presence and detection capabilities, and introduce an element of unpredictability to disrupt potential terrorist activities. FAMS’s budget request for fiscal year 2010 is $860.1 million, which is an increase of $40.6 million (or about 5 percent) over the $819.5 million appropriated in fiscal year 2009. The majority of the agency’s budget provides for the salaries of federal air marshals and supports maintenance of infrastructure that includes 21 field offices. FAMS’s operational approach (concept of operations) for achieving its core mission is based on assessments of risk-related factors, since it is not feasible for federal air marshals to cover all of the approximately 29,000 domestic and international flights operated daily by U.S. commercial passenger air carriers. Specifically, FAMS considers the following risk- related factors to help ensure that high-risk flights operated by U.S. commercial carriers—such as the nonstop, long-distance flights targeted on 9/11—are given priority coverage by federal air marshals: Threat (intelligence): Available strategic or tactical information affecting aviation security is considered. Vulnerabilities: Although FAMS’s specific definition is designated sensitive security information, DHS defines vulnerability as a physical feature or operational attribute that renders an entity open to exploitation or susceptible to a given hazard. Consequences: FAMS recognizes that flight routes over certain geographic locations involve more potential consequences than other routes. 
FAMS attempts to assign air marshals to provide an onboard security presence on as many of the flights in the high-risk category as possible. FAMS seeks to maximize coverage of high-risk flights by establishing coverage goals for 10 targeted critical flight categories. In order to reach these coverage goals, FAMS uses a scheduling process to determine the most efficient flight combinations that will allow air marshals to cover the desired flights. FAMS management officials stressed that the overall coverage goals and the corresponding flight schedules of air marshals are subject to modification at any time based on changing threat information and intelligence. For example, in August 2006, FAMS increased its coverage of international flights in response to the discovery, by authorities in the United Kingdom, of specific terrorist threats directed at flights from Europe to the United States. FAMS officials noted that a shift in resources of this type can have consequences because of the limited number of air marshals. The officials explained that international missions require more resources than domestic missions partly because the trips are of longer duration. In addition to the core mission of providing an onboard security presence on selected flights, FAMS also assigns air marshals to VIPR teams on an as-needed basis to provide a ground-based security presence. For the first quarter of fiscal year 2009, TSA reported conducting 483 VIPR operations, with about 60 percent of these dedicated to ground-based facilities of the aviation domain (including air cargo, commercial aviation, and general aviation) and the remaining VIPR operations dedicated to the surface domain (including highways, freight rail, pipelines, mass transit, and maritime). TSA’s budget for fiscal year 2009 reflects support for 225 VIPR positions at a cost of $30 million. 
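FAMS's actual scoring model, coverage goals, and flight data are designated sensitive security information, so only the general pattern described above can be illustrated: score each flight on threat, vulnerability, and consequence, then assign a limited pool of marshal teams to the highest-scoring flights first. The sketch below is a toy model; the flight records, factor scales, weights, and function names are all invented for illustration and are not FAMS's methodology.

```python
# Toy illustration of risk-based flight coverage -- NOT FAMS's actual model.
# Each flight carries three hypothetical 0-10 risk factors; a limited
# number of marshal teams is assigned to the highest-scoring flights.

def risk_score(flight, weights=(0.5, 0.25, 0.25)):
    """Weighted composite of the three illustrative risk factors."""
    w_threat, w_vuln, w_conseq = weights
    return (w_threat * flight["threat"]
            + w_vuln * flight["vulnerability"]
            + w_conseq * flight["consequence"])

def assign_teams(flights, teams_available):
    """Greedily cover the highest-risk flights until teams run out."""
    ranked = sorted(flights, key=risk_score, reverse=True)
    return [f["id"] for f in ranked[:teams_available]]

flights = [
    {"id": "A1", "threat": 9, "vulnerability": 6, "consequence": 8},  # e.g., a long-haul nonstop
    {"id": "B2", "threat": 3, "vulnerability": 4, "consequence": 2},  # e.g., a short regional hop
    {"id": "C3", "threat": 7, "vulnerability": 8, "consequence": 9},
]

print(assign_teams(flights, teams_available=2))  # the two highest-risk flights
```

Because there are far fewer marshal teams than the roughly 29,000 daily flights, any model of this kind only ranks flights for priority coverage; as the testimony notes, coverage goals and schedules remain subject to change at any time as new threat information and intelligence arrive.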
TSA plans to significantly expand the VIPR program in fiscal year 2010 by adding 15 teams consisting of 338 positions at a cost of $50 million. However, questions have been raised about the effectiveness of the VIPR program. In June 2008, for example, the DHS Office of Inspector General reported that although TSA has made progress in addressing problems with early VIPR deployments, it needs to develop a more collaborative relationship with local transit officials if VIPR exercises are to enhance mass transit security. After evaluating FAMS’s operational approach for providing an onboard security presence on high-risk flights, the Homeland Security Institute, a federally funded research and development center, reported in July 2006 that the approach was reasonable. In its report, the Homeland Security Institute noted the following regarding FAMS’s overall approach to flight coverage: FAMS applies a structured, rigorous approach to analyzing risk and allocating resources. The approach is reasonable and valid. No other organizations facing comparable risk-management challenges apply notably better methodologies or tools. As part of its evaluation methodology, the Homeland Security Institute examined the conceptual basis for FAMS’s approach to risk analysis. Also, the institute examined FAMS’s scheduling processes and analyzed outputs in the form of “coverage” data reflecting when and where air marshals were deployed on flights. Further, the Homeland Security Institute developed and used a model to study the implications of alternative strategies for assigning resources. We reviewed the institute’s evaluation methodology and generally found it to be reasonable. Although the institute’s July 2006 report concluded that FAMS’s operational approach was reasonable and valid, the report also noted that certain types of flights were covered less often than others. Accordingly, the institute made recommendations for enhancing the operational approach. 
For example, the institute recommended that FAMS increase randomness or unpredictability in selecting flights and otherwise diversify the coverage of flights. To address the Homeland Security Institute’s recommendations, FAMS officials stated that a broader approach for determining which flights to cover has been implemented—an approach that opens up more flights for potential coverage, provides more diversity and randomness in flight coverage, and extends flight coverage to a variety of airports. Our January 2009 report noted that FAMS had implemented or had ongoing efforts to implement the institute’s recommendations. We reported, for example, that FAMS is developing an automated decision-support tool for selecting flights and that this effort is expected to be completed by December 2009. To better understand and address operational and quality-of-life issues affecting the FAMS workforce, the agency’s previous Director—who served in that capacity from March 2006 to June 2008—established various processes and initiatives. Chief among these were 36 issue-specific working groups to address a variety of topics, such as tactical policies and procedures, medical or health concerns, recruitment and retention practices, and organizational culture. Each working group typically included a special agent-in-charge, a subject matter expert, air marshals, and mission support personnel from the field and headquarters. According to FAMS management, the working groups typically disband after submitting a final report, but applicable groups could be reconvened or new groups established as needed to address relevant issues. The previous Director also established listening sessions that provided a forum for employees to communicate directly with senior management and an internal Web site for agency personnel to provide anonymous feedback to management. 
Another initiative implemented was assigning an air marshal to the position of Ombudsman in October 2006 to provide confidential, informal, and neutral assistance to employees to address workplace- related problems, issues, and concerns. These efforts have produced some positive results. For example, as noted in our January 2009 report, FAMS amended its policy for airport check-in and flight boarding procedures (effective May 15, 2008) to better ensure the anonymity of air marshals in mission status. In addition, FAMS modified its mission scheduling processes and implemented a voluntary lateral transfer program to address certain issues regarding air marshals’ quality of life—and has plans to further address health issues associated with varying work schedules and frequent flying. Also, our January 2009 report noted that FAMS was taking steps to procure new personal digital assistant communication devices—to replace the current, unreliable devices—and distribute them to air marshals to improve their ability to communicate effectively with management while in mission status. All of the 67 air marshals we interviewed in 11 field offices commented favorably about the various processes and initiatives for addressing operational and quality-of-life issues, and the air marshals credited the leadership of the previous FAMS Director. The current FAMS Director, as noted in our January 2009 report, has expressed a commitment to sustain progress and reinforce a shared vision for workforce improvements by continuing applicable processes and initiatives. In our January 2009 report, we also noted that FAMS plans to conduct a workforce satisfaction survey of all employees every 2 years, building upon an initial survey conducted in fiscal year 2007, to help identify issues affecting the ability of its workforce to carry out its mission. 
We reported that a majority (79 percent) of the respondents to the 2007 survey indicated that there had been positive changes from the prior year, although the overall response rate (46 percent) constituted less than half of the workforce. The 46 percent response rate was substantially less than the 80 percent rate encouraged by the Office of Management and Budget (OMB) in its guidance for federal surveys that require its approval. According to the OMB guidance, a high response rate increases the likelihood that the views of the target population are reflected in the survey results. We also reported that the 2007 survey’s results may not provide a complete assessment of employees’ satisfaction because (1) 7 of the 60 questions in the 2007 survey questionnaire combined two or more issues, which could leave respondents unclear on which issue to address and result in potentially misleading responses, and (2) none of the 60 questions provided for response options such as “not applicable” or “no basis to judge”—responses that would be appropriate when respondents had little or no familiarity with the topic in question. In summary, our January 2009 report noted that obtaining a higher response rate to FAMS’s future surveys and modifying the structure of some questions could enhance the surveys’ potential usefulness by, for instance, providing a more comprehensive basis for assessing employees’ attitudes and perspectives. Thus, to increase the usefulness of the agency’s biennial workforce satisfaction surveys, we recommended that the FAMS Director take steps to ensure that the surveys are well designed and that additional efforts are considered for obtaining the highest possible response rates. Our January 2009 report recognized that DHS and TSA agreed with our recommendation and noted that FAMS was in the initial stages of formulating the next workforce satisfaction survey.
More recently, by letter dated July 2, 2009, DHS informed applicable congressional committees and OMB of actions taken in response to our recommendation. The response letter noted that agency plans include (1) ensuring that questions in the 2009 survey are clearly structured and unambiguous, (2) conducting a pretest of the 2009 survey questions, and (3) developing and executing a detailed communication plan. Federal air marshals are an important layer of aviation security. FAMS, to its credit, has established a number of processes and initiatives to address various operational and quality-of-life issues that affect the ability of air marshals and other FAMS personnel to perform their aviation security mission. The current FAMS Director has expressed a commitment to continue relevant processes and initiatives for identifying and addressing workforce concerns, maintaining open lines of communications, and sustaining progress. Similarly, this hearing provides an opportunity for congressional stakeholders to focus a dialogue on how to sustain progress at FAMS. For example, relevant questions that could be raised include the following: In implementing the agency’s concept of operations, how effectively does FAMS use new threat information and intelligence to modify flight coverage goals and the corresponding flight schedules of air marshals? In managing limited resources to mitigate a potentially unlimited range of security threats, how does FAMS ensure that federal air marshals are allocated appropriately for meeting in-flight security responsibilities as well as supporting new ground-based security responsibilities, such as VIPR team assignments? What cost-benefit analyses, if any, are being used to guide FAMS decision makers? To what extent have appropriate performance measures been developed for gauging the effectiveness and results of resource allocations and utilization? 
How does FAMS foster career sustainability for federal air marshals given that maintaining an effective operational tempo is not necessarily compatible with supporting a better work-life balance? These types of questions warrant ongoing consideration by FAMS management and continued oversight by congressional stakeholders. Mr. Chairman, this completes my prepared statement. I look forward to answering any questions that you or other members of the subcommittee may have. For information about this statement, please contact Steve Lord, Director, Homeland Security and Justice Issues, at (202) 512-4379, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions to this testimony include David Alexander, Danny Burton, Katherine Davis, Mike Harmond, and Tom Lombardi. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | By deploying armed air marshals onboard selected flights, the Federal Air Marshal Service (FAMS), a component of the Transportation Security Administration (TSA), plays a key role in helping to protect approximately 29,000 domestic and international flights operated daily by U.S. air carriers. This testimony discusses (1) FAMS's operational approach or "concept of operations" for covering flights, (2) an independent evaluation of the operational approach, and (3) FAMS's processes and initiatives for addressing workforce-related issues. Also, this testimony provides a list of possible oversight issues related to FAMS. 
This testimony is based on GAO's January 2009 report (GAO-09-273), with selected updates in July 2009. For its 2009 report, GAO analyzed policies and procedures regarding FAMS's operational approach and a July 2006 classified assessment of that approach. Also, GAO analyzed employee working group reports and related FAMS's initiatives for addressing workforce-related issues, and interviewed FAMS headquarters officials and 67 air marshals (selected to reflect a range in levels of experience). Because the number of air marshals is less than the number of daily flights, FAMS's operational approach is to assign air marshals to selected flights it deems high risk--such as the nonstop, long-distance flights targeted on September 11, 2001. In assigning air marshals, FAMS seeks to maximize coverage of flights in 10 targeted high-risk categories, which are based on consideration of threats, vulnerabilities, and consequences. In July 2006, the Homeland Security Institute, a federally funded research and development center, independently assessed FAMS's operational approach and found it to be reasonable. However, the institute noted that certain types of flights were covered less often than others. The institute recommended that FAMS increase randomness or unpredictability in selecting flights and otherwise diversify the coverage of flights within the various risk categories. In its January 2009 report, GAO noted that the Homeland Security Institute's evaluation methodology was reasonable and that FAMS had taken actions (or had ongoing efforts) to implement the institute's recommendations. To address workforce-related issues, FAMS's previous Director, who served until June 2008, established a number of processes and initiatives, such as working groups, listening sessions, and an internal Web site for agency personnel to provide anonymous feedback to management. These efforts have produced some positive results. 
For example, FAMS revised its policy for airport check-in and aircraft boarding procedures to help protect the anonymity of air marshals in mission status, and FAMS modified its mission scheduling processes and implemented a voluntary lateral transfer program to address certain quality-of-life issues. The air marshals GAO interviewed expressed satisfaction with FAMS's efforts to address workforce-related issues. The current FAMS Director has expressed a commitment to continue applicable processes and initiatives. Also, FAMS has plans to conduct a workforce satisfaction survey of all employees every 2 years, building upon an initial survey conducted in fiscal year 2007. GAO's review found that the potential usefulness of future surveys could be enhanced by ensuring that the survey questions and the answer options are clearly structured and unambiguous and that additional efforts are considered for obtaining the highest possible response rates. To its credit, FAMS has made progress in addressing various operational and quality-of-life issues that affect the ability of air marshals to perform their aviation security mission. However, sustaining progress will require ongoing consideration by FAMS management--and continued oversight by congressional stakeholders--of key questions, such as how to foster career sustainability for air marshals given that maintaining an effective operational tempo can at times be incompatible with supporting a work-life balance. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Army has three large combat training centers that train brigade-sized units during exercises, referred to as “rotations,” that last for 13 to 25 days: the National Training Center, located at Fort Irwin, California; the Joint Readiness Training Center, located at Fort Polk, Louisiana; and the Combat Maneuver Training Center, located at Hohenfels, Germany. Figure 1 illustrates NBC training being conducted at the Army’s Combat Maneuver Training Center in Hohenfels, Germany. The Marine Corps has an Air Ground Combat Center at Twentynine Palms, California, where it trains brigade-sized units in a combined arms exercise that similarly allows Marine Corps units to train to perform their missions in large maneuver areas and to fire their ground and air weapons. “A CTC experience is the closest thing to combat the Army’s soldiers, leaders, staffs and units ever experience. It is a battlefield where soldiers can die, come back to life, correct their mistakes, and fight again. . . . the Army must look at harnessing the role of the CTCs in developing doctrine and collecting data so it can maximize their potential and draw the right conclusions from lessons learned in a training environment.” During fiscal years 2002 and 2003, 57 active and reserve component rotations took place at the three Army CTCs. Rotation costs are significant: In 1999 we reported that the Army spent about $1 billion a year to provide training at the NTC, the JRTC, and the CMTC. These centers are equipped with instrumentation and simulators that allow the units to have their battle effectiveness measured, recorded, and commented on by observers/controllers, who are Army subject-matter experts for NBC defense and other mission areas. During fiscal years 2002 and 2003, approximately 12 active and reserve battalion-sized Marine units underwent combined arms exercises at Twentynine Palms. 
DOD, the Army, and the Marine Corps have all stressed the importance of fully integrating NBC scenarios into their training exercises, whether conducted at a unit’s home station, at a CTC, or elsewhere. The U.S. National Strategy to Combat Weapons of Mass Destruction acknowledges that NBC weapons in the possession of hostile states and terrorists represent one of the greatest security challenges facing the United States. At the DOD level, Joint Publication 3-11, Joint Doctrine for Operations in Nuclear, Biological, and Chemical (NBC) Environments, states that “US forces must be prepared to conduct and sustain operations in NBC environments with minimal degradation” and urges that individuals and organizations train often and realistically while wearing NBC protective clothing so that they are better prepared for the constraints it imposes on communication, vision, and movement. Army and Marine Corps regulations, orders, and doctrine similarly stress the importance of fully integrating NBC scenarios into training exercises. For example, Army Regulation 350-1, “Army Training and Education,” which establishes Army-wide baseline NBC defense training policy, requires that NBC defense tasks, such as contamination avoidance, protection, and decontamination, be fully integrated into units’ mission training, including field training exercises. Specifically, Army Regulation 350-1 states that “The NBC defense training must be fully integrated into unit exercises . . . for both offensive and defensive operations.” This integration is intended to develop and test the capability of commanders, staffs, and units to perform their missions under extended NBC conditions. In other words, NBC skills are not seen as isolated tasks, but NBC defense is viewed as a condition under which units should be able to do their mission-essential tasks. 
Similarly, Marine Corps Order 3400.3F, paragraph 6, establishes Marine Corps-wide baseline NBC defense training requirements and states that “Every unit and commander will fully integrate NBCD training into every combat, combat support, combat service support, and command and control exercise during offensive and defensive operations, to include live fire evolutions.” Like the Army, the Marine Corps intends to integrate NBC training into its exercises in order to develop and test the ability of Marines at all levels not only to survive an NBC attack but to perform their missions under NBC conditions. Army and Marine Corps regulations and orders also require after-action reporting for unit training exercises, including those that occur at the CTCs. The Army believes that it is important to capture lessons learned during training in order to identify combat-relevant lessons learned that will enhance the Army’s ability to perform its missions and that will support tailored training for anticipated conditions of combat. Army regulations for the JRTC and the CMTC state that NBC defense training should be addressed in every training unit commander’s after-action report, but guidance for the NTC and the overall Army lessons learned program does not. Like Army regulations, Marine Corps orders state that after-action reports should be prepared for all training exercises and maintained in a central lessons learned facility. The Marine Corps uses training lessons learned to identify unit strengths and weaknesses that must be addressed for the overall benefit of the Marine Corps. Although the Army and the Marine Corps stress in their doctrine, regulations, or orders the need to fully integrate NBC training into training exercises and both have defined what they consider to be essential NBC skills, neither has established minimum NBC tasks for units to perform while they are training at the CTCs. 
They believe that it is important to leave decisions on the amount and type of training that occur at the CTCs to commanders. Consequently, during fiscal years 2002 and 2003, Army and Marine Corps units and personnel attending the CTCs received widely varying amounts of NBC training, with some receiving little or none. Furthermore, Army units that do undergo NBC training at the CTCs often do not perform to the proficiency levels defined by the Army as acceptable. Based on commanders’ discretion, both services’ CTC exercises currently are oriented toward preparing units for operations in Iraq and Afghanistan and do not emphasize NBC defense training. Because of this variation in NBC training at the CTCs, the Army and the Marine Corps often miss the unique opportunity offered by the CTCs to be assured through objective observer/controller assessments that every servicemember who trains at a CTC has training in a minimum number of NBC tasks essential to survive and perform in an NBC-contaminated environment. Both the Army and the Marine Corps have defined in various publications what they believe are the essential NBC skills that all soldiers and Marines should have. Also, as described in the background section of this report, both services stress in their doctrine, regulations, or orders the need to fully integrate NBC defense training into their exercises. The Army has defined what it considers are the NBC skills essential for soldiers to know in its Army Universal Task List. Army commanders select training tasks, including NBC training tasks, from this and other task lists. For each task, the Army provides an extended definition, along with suggested ways to measure a soldier’s proficiency in doing the task. 
For example, for the task of using individual and collective NBC protective equipment, one measure a commander may select to evaluate a soldier’s competence includes the time it takes a soldier to don chemical protective gear in response to enemy use of NBC weapons. In addition, the Army requires that units conduct weapons qualifications on individual and crew-served weapons with personnel wearing chemical protective equipment. Neither the task list nor the regulation specifies where such training is to be conducted. U.S. Forces Command, which oversees the training and readiness of U.S.-based Army operational forces, has issued a list of predeployment NBC tasks, but it also does not specify where training for these tasks must take place. Forces Command directs that soldiers spend approximately 8 hours per quarter under NBC defense conditions. These tasks are all in the Army’s most basic NBC skill level category and include wearing and maintaining chemical protective equipment and identifying chemical agents. Like the Army, the Marine Corps has defined what it considers to be NBC tasks essential for Marines to know, both to survive an NBC attack and to continue performing the unit’s mission. In Marine Corps Order 3400.3F, “Nuclear, Biological, and Chemical Defense (NBCD) Training,” the Marine Corps lists essential individual survival standards, such as maintaining and wearing protective chemical equipment, detecting chemical agents, and decontaminating one’s skin and equipment. It also lists essential “basic operating standards,” such as using crew and personal weapons while wearing NBC protective gear, maintaining NBC equipment, avoiding contamination while continuing the mission, and decontaminating units if necessary. The order does not state that any of these tasks must be included in exercises such as the combined arms exercise at Twentynine Palms. Appendix II provides a listing of Army and Marine Corps definitions of essential NBC skills. 
NBC training at the Army’s CTCs varies widely, and many Army subunits receive little NBC training at the CTCs. For example, in fiscal years 2002 and 2003, observers/controllers from the NTC and the JRTC estimated that only about 5 percent of soldiers underwent NBC training during a brigade rotation that required them to wear their full protective gear for at least 18 hours. This is because Army regulations do not mandate that NBC training must occur at the CTCs, leaving commanders to decide what skills training to include in the unit’s CTC rotation. For the NBC training that did occur at the CTCs, observers/controllers frequently reported that the units did not perform even basic NBC tasks to the level of proficiency defined as acceptable by the Army. During our review of Army CTC training that occurred during fiscal years 2002 and 2003, we found that, while most units were exposed to some NBC training at the CTCs, the overall percentage of Army battalion- or brigade-sized units that received extensive NBC training during a rotation was small. One measure of intensive unit training under NBC defense conditions is the extent to which soldiers are required to dress and operate for extended periods of time in their individual protective clothing, including their masks and gloves. NTC training officials estimated that, on average since fiscal year 2002, a typical 20- to 25-day brigade rotation—which may include up to 4,000 soldiers—includes NBC events that cause the entire unit to don the full chemical protective suit for a total of 2 to 3 hours and about 150 to 200 soldiers to train in full protective gear for a total of 18 to 24 hours. In other words, only about 5 percent of the brigade is affected by NBC training that requires wearing full protective gear for more than 2 to 3 hours. Similarly, an Army JRTC training official reported that during a typical brigade rotation, an average of only 200 soldiers operate in full protective gear for a total of 16 to 20 hours. 
The number of personnel who receive this training at the JRTC ranges from as few as 50 soldiers up to 400 or more, depending on the type of contamination and the location of the attack, and the time that a soldier spends in protective gear can range from as little as 1 hour to as much as 48 hours. Because Army regulations do not state what NBC training must occur at the CTCs, the commander of the unit to be trained may choose not to emphasize it during the unit’s CTC rotation. Typically, up to 180 days before the rotation is to start, the brigade commander, in coordination with the division or other senior commander, begins to coordinate with the CTC to specify what training objectives will be included in the unit’s training rotation. A unit rotation traditionally emphasizes the warfighting skills a unit requires to perform its mission and combat operations. Because training to survive and operate under potential NBC conditions is generally treated as a condition of training for all mission-essential tasks for units, rather than as a separate mission task, the CTCs, which develop the training scenarios, generally propose some types of NBC conditions in all rotations. However, unit commanders may specify that a CTC include more or fewer NBC conditions in training scenarios. During fiscal years 2002 and 2003, the Army’s CTCs generally included three to seven chemical events in each standard rotation’s training scenarios. A particular chemical attack by an “enemy” is generally targeted at a specific area of the simulated battlefield and thus involves those units that may be affected by a chemical attack in that area. Chemical events during fiscal years 2002 and 2003 included the simulated use of chemicals that were categorized as “persistent” (defined as lasting for 24 hours or more) and “nonpersistent” (defined as lasting for less than 24 hours) and that were delivered by “enemy” artillery, rockets, aircraft bombs, truck bombs, rucksack bombs, and spray. 
At the NTC and the CMTC, observers/controllers use CS (tear) gas to simulate chemical agents. Flares, ground-burst simulators, air-burst simulators, or spray tanks mounted on helicopters may also be used to simulate enemy chemical weapons. At the JRTC and the NTC, observers/controllers also frequently simulate a biological event by such means as simulating that the “enemy” has sabotaged the water supply by poisoning it with a biological contaminant. The CTCs have increasingly emphasized training rotations specifically tailored to preparing units for expected deployments. These rotations might or might not include chemical or biological events. Many of the units completing the tailored rotations at the Army’s CTCs in fiscal years 2002 and 2003 later deployed for combat operations in Afghanistan or Iraq. NBC defense training at CTCs has been emphasized less for units training for Bosnia and Kosovo or for Afghanistan and Iraq after NBC weapons were not found there. Because the NBC defensive training for each soldier varies so widely at the CTCs, the Army continues to have no assurance that all servicemembers attending a CTC have trained on a minimum number of essential NBC tasks. Our review of after-action reports from the three Army CTCs for fiscal years 2002 and 2003 indicated that units frequently arrived at the CTCs at the beginning of their training periods without having mastered basic NBC skills. Observers/controllers frequently comment on units’ NBC skills when they first arrive at training at the NTC to assess the units’ needed level of NBC training and note that, often, units do not perform even basic NBC tasks to the level of proficiency that the Army defines as acceptable. Observers/controllers at all three CTCs noted that because units had not adequately prepared for basic NBC training at their home stations, they were not able to fully train on the more sophisticated collective and mission tasks under NBC conditions that could be practiced at the CTCs. 
Of the three CTCs, the NTC had the most complete information on the NBC skills of the units being trained during fiscal years 2002 and 2003. Unlike the other CTCs, the NTC often uses a standard format to assess incoming units on six basic NBC tasks while they are receiving their equipment and assembling to begin training. For example, one of these early NTC training scenarios subjects a brigade arriving at a deployment destination to an attack by a chemical weapon. Table 1 summarizes the assessments made by NTC observers/controllers of the NBC skills of brigades that arrived for training during fiscal years 2002 and 2003. The table lists the six NBC tasks assessed at the NTC and shows whether the brigades did or did not perform the tasks to the level of proficiency defined as acceptable by the Army. Most brigades failed to perform NBC tasks 3, 4, and 6 to standard; these tasks are ranked at the most basic skill level, called skill level 1. We were unable to compile summaries, such as the NTC summary in table 1, of how well brigades did in basic NBC tasks at the JRTC and the CMTC because these centers did not routinely assess and collect this information. However, JRTC and CMTC after-action reports frequently noted deficiencies in units’ NBC training attributable to their incomplete preparation at home stations. For example, for several rotations for fiscal years 2002 and 2003, JRTC observers/controllers reported that soldiers and leaders lacked training and knowledge of critical NBC tasks. Observers/controllers recommended that units “Develop an NBC training plan at home station that addresses the individual, leader, and collective soldier skills necessary to sustain operations in an NBC environment.” A similar CMTC recommendation called for “more emphasis on NBC training and integration at home station.” The observation that units do not get adequate NBC training at their home stations is not new and has been repeatedly reported by DOD and the Army. 
In 1998, for example, the DOD Office of the Inspector General reported that unit commanders generally were not fully integrating chemical and biological defense into their units’ collective mission training exercises. The report noted that “units rarely trained for their mission-essential tasks under [NBC] conditions.” In 2002, the Army Audit Agency reported that it had evaluated training for chemical and biological defense provided to soldiers at the unit level and found that this training needed to be more effectively integrated and supplemented. In DOD’s 2002 report to Congress on its Chemical and Biological Defense Program, the department stated that the Army’s CTCs continued to see units at the company, battalion, and brigade levels that were unable to perform all NBC tasks to standard. The report concluded that this less-than-satisfactory performance at the CTCs was directly attributable to a lack of home-station NBC training. The report stated the need for increased emphasis in educating senior leaders on the necessity for NBC training and expressed concern that NBC training consist not only of NBC survival but also of continuous operations in an NBC environment. We have also reported for more than a decade on problems with Army units’ inadequate home-station training. In 1991, we reported that Army home-station training lacked realism and often did not include NBC training. In 1996, we reported that officials from Army major commands, corps, divisions, and individual units said that chemical and biological defense skills not only tended to be difficult to attain and were highly perishable but were also often given a lower priority because of, among other things, too many other higher priority taskings. In 1999, we noted that training units lacked proficiency when they arrived at the training centers, and as a result, the content of the CTC training was frequently modified to provide less challenging scenarios than would normally be expected. 
We also reported that, although units should have been proficient at battalion-level tasks when they arrived at the CTCs, many had trained only up to company level, and the units’ leaders struggled with the more complicated planning and synchronization tasks required for the battalion- and brigade-level exercises conducted at the centers. No NBC training was conducted during combined arms exercises at the Marine Corps’ training center at Twentynine Palms for at least 5 years prior to our review. While Marine Corps orders and doctrine emphasize the need to include NBC defense training in combined arms exercises, they do not provide any clearly articulated NBC defense training tasks or requirements that must be accomplished in conjunction with these exercises. In the absence of specific training requirements, NBC defense training has historically been left up to the discretionary control of the unit commander, and Marine Corps commanders decided to remove it to make room for other training. According to a Marine Corps training official, unit commanders gave several reasons that NBC defense training at the combined arms exercise was given a lower priority, including that it was difficult to perform tasks in cumbersome and uncomfortable protective gear, chemical training was time-consuming, and the likelihood of NBC warfare was perceived as low. In November 2001, the Naval Audit Service issued a report on infantry and armor readiness in the Marine Corps. One of its findings was that the Marine Corps was not fully integrating chemical and biological training into its collective unit exercises in a consistent manner. The Naval Audit Service attributed this condition to the fact that Marine Corps officers did not consider chemical and biological training a high priority, even though they considered it important. 
One of the Naval Audit Service’s recommendations was for the Marine Corps to “integrate [chemical and biological defense] training into unit field exercises under realistic conditions, and insure that training is appropriately integrated into such major events as Combined Armed Exercises . . . .” In a February 2004 memorandum to the Commandant of the Marine Corps, the Commanding General of the Marine Corps Training and Education Command stated that in response to the Naval Audit Service’s recommendation, NBC training and assessment had been added to the formal schedule at the combined arms exercise program in January 2004. The memorandum stated that “Due to world events, it continues to be a challenge concerning the ‘full integration’ of Nuclear, Biological, Chemical Defense training into unit exercise programs.” In 2003, in response to the Naval Audit Service’s recommendation, the Marine Corps began its planning for introducing NBC training into the combined arms exercises at Twentynine Palms. In that year, the Marine Corps assigned two NBC staff specialists to Twentynine Palms to begin devising a training plan for the combined arms exercise program. Also, chemical protective equipment was obtained for use at Twentynine Palms by rotating Marine Corps units. In January 2004, the Marine Corps introduced NBC defense classroom courses and one field exercise into the combined arms exercise program. Appendix III provides a listing of the classroom NBC courses that were introduced in the first week of rotations in fiscal year 2004 and were conducted at the platoon to company levels. According to a Marine Corps official, eight combined arms exercise rotations were conducted in fiscal year 2004. NBC training was introduced into the third and fourth rotations in January and February, respectively. Rotations five and six concentrated on stability and support operations but did include NBC classroom training. 
Rotations seven and eight, for reserve units, also received the NBC classroom training but no NBC field exercises. Planned rotations 9 and 10 were canceled. The Marine Corps is introducing a shortened, revised combined arms exercise scenario that is more oriented to current operational requirements. Exercise revisions include an emphasis on small-unit leadership and stability and support operations, which encompass asymmetric and counterinsurgency operations. A Marine Corps official told us that the current design of the revised combined arms exercise scenario does not include NBC training. However, an extensive home-station training period for units precedes attendance at the revised combined arms exercise, and Marine Corps units are required to accomplish the NBC training needed for their units’ mission-essential tasks. According to the Marine Corps, when it resumes its standard combined arms exercise rotations, units will participate in whatever NBC task training the combined arms exercise scenarios include at that time. For both the Army and the Marine Corps, lessons learned during Operation Iraqi Freedom identified many NBC skill deficiencies that were highlighted earlier by observers/controllers during individual brigade rotations through the Army’s CTCs during fiscal years 2002 and 2003. These continuing deficiencies illustrate the importance of the Army and the Marine Corps establishing minimal NBC defense training tasks for units training at their respective CTCs. Problems identified by both the Marine Corps and the Army during this operation included units arriving without appropriate NBC equipment and suits, units arriving without necessary individual and collective NBC skills, units unable to properly set up and operate their NBC detection equipment, chemical personnel not included in battlefield decisions, and units unable to properly decontaminate their equipment. 
Many of these problems were also noted in the Army’s lessons learned reporting from earlier conflicts, including those in the Balkans, Somalia, and Operation Desert Shield/Storm. Establishing minimal NBC tasks for units attending CTCs could provide an opportunity for units’ NBC defense capabilities to be objectively assessed and for CTC observers/controllers to identify units’ NBC equipment shortfalls. This information may aid commanders in decisions on units’ training needs. The Army and the Marine Corps do not always report lessons learned on NBC training at the CTCs in a way that can be used to identify trends over time and allow for cross-unit and cross-center comparisons. Army and Marine Corps regulations and orders strongly encourage after-action reporting for all training exercises, including those that occur at the CTCs. However, Army and Marine Corps after-action reviews of CTC training do not always discuss NBC training and, when they do, the reporting is not standardized in a way that would fully support the identification of NBC trends. Army and Marine Corps regulations and orders state that after-action reports and lessons learned should be prepared to capture the results of training that occurs at the CTCs, but they do not always state that NBC training must be covered in these documents or encourage NBC training results to be presented in a standardized format. As a result, different types of after-action reports and lessons learned are prepared for CTC training, and these documents might or might not mention NBC training. The Army’s regulation that establishes the purpose and objectives of its CTC program states that as part of their mission to provide realistic joint combined arms training, the CTCs will provide the Army and joint participants with feedback to improve warfighting, to increase units’ readiness for deployment and warfighting, and to provide a data source for lessons learned. 
This regulation also requires that each CTC conduct doctrinally based after-action reviews for each unit that undergoes a rotation at a CTC. The Army regulation on the Army’s lessons learned system requires that these after-action reports be submitted to the Center for Army Lessons Learned (CALL) no later than 120 days from the end of an exercise. Each of the Army’s CTC regulations describes a general format to be used in the after-action reports and lists specific topics to be included. Though the CTC and lessons learned regulations agree on some general points, they differ on what should be covered specifically in after-action reports. For example, the JRTC and CMTC regulations indicate that NBC defense training should be addressed in the training unit commander’s after-action report, but the CTC, NTC, and overall Army lessons learned program regulations do not. Appendix IV includes specific details of how the various Army regulations differ in recommended formats for after-action reports. Like the Army regulations, the Marine Corps order on its lessons learned system states that after-action reports should be prepared for all training exercises. However, the Marine Corps order for the combined arms exercise program at Twentynine Palms does not specify that written after-action reports must be prepared, only that a structured debrief be conducted upon the conclusion of each event or exercise. Though not required by Marine Corps order, Twentynine Palms does prepare a Microsoft PowerPoint (computer software) presentation describing events that took place during the final 3 days of the exercise. For the two rotations in 2004 in which NBC field training was included in the combined arms exercise, NBC training was not included in after-action reporting because it did not occur during the 3 training days covered by the reporting. 
Not all Army regulations require that NBC training completed at a CTC be discussed in the written after-action reports that are prepared for each training rotation at the three Army CTCs, and thus the reports do not always include information on NBC training. These reports are primarily intended to be feedback for the units being trained to help them assess their own training levels and craft home-station training plans to address identified deficiencies. The after-action reporting and supplementary materials provided to the units that are trained, such as videos of training, are called “take-home packages” and may include as many as five or six compact discs containing Microsoft PowerPoint presentations and summaries prepared by observers/controllers. The structure, format, and content of the after-action reporting vary by center. The NTC typically includes Microsoft PowerPoint briefings and written after-action reports for the units training during each rotation. When subunits of a brigade experience NBC “events,” or NBC training scenarios, during their rotations, observers/controllers generally include a description of the units’ performance in an “NBC executive summary,” which cites areas in which subunits need to improve proficiency, along with specific recommendations for home-station training and citations of applicable NBC-related field manuals. When subunits do not experience NBC events, this section is absent from after-action reporting for the overall unit. Nowhere in the report does the NTC include an overall brigade summary for the entire rotation period of 20 to 25 days that indicates the number of NBC events that occurred during a single rotation, the percentage of subunits that conducted NBC tasks, the type of tasks performed, or how well all individual subunits did. The NTC does include, in many cases, an assessment of a brigade’s NBC skills in its first week of training. 
Out of the 21 rotations conducted by the NTC during fiscal years 2002 and 2003, take-home packages for 12 brigades contained such scorecards, which assessed units’ ability to perform six essential NBC tasks when they first arrived for training. Like a take-home package for the NTC, a take-home package prepared by the JRTC contains multiple types of documents and after-action reporting. One document lists the types of NBC events planned for the rotation and their timing. When NBC events are not planned for the rotation, this document is absent, and when planned NBC events are canceled, there is no documentation stating that these scenario events did not occur or why. Neither is there a document that contains an overall summary of how many NBC events occurred during a single rotation, the percentage of subunits performing NBC tasks and the type of tasks performed, or how well all individual subunits did. Unlike the NTC, the JRTC includes no “scorecard” for assessing units’ ability to perform basic NBC tasks. When subunits do experience NBC events, JRTC observers/controllers cite areas in which subunits need to “sustain” or “improve” proficiency, along with specific recommendations for home-station training and citations of applicable NBC field manuals. A CMTC take-home package also contains multiple Microsoft PowerPoint briefings, written after-action reporting, and videotapes. However, a package might or might not mention NBC training that occurred during a rotation, as this is not a mandatory reporting section. When a subunit experiences an NBC event, an observer/controller may mention how the unit performed if the subunit’s performance was considered to be notable. When NBC events are discussed in an after-action report, CMTC, like NTC and JRTC, includes general observations of a unit’s performance, comments on what it did well, and recommendations for improvement. 
Because there is no overall NBC summary document, however, CMTC’s take-home packages seldom provide information on how many NBC events occurred during a rotation, what these events were, what percentage of the overall rotating unit participated, and how well they did on particular NBC tasks. Because no NBC section is required, it is not possible to calculate what percentage of CMTC rotations experience NBC events. Twice a year, CALL publishes “trends” documents for each Army CTC. These publications cover all rotations that occurred during a 6-month period and expunge any information from the reporting that would identify a particular unit. The trends documents are compiled from after-action reports prepared for CTC training. They are prepared by observers/controllers and given to CALL representatives at each CTC, who then forward these reports to CALL analysts at Fort Leavenworth, Kansas. When NBC training is determined to reveal a “trend” to report, it is included in the trends publications. Individual take-home packages and after-action reports that identify particular units are not generally made available. Rather, they are protected to prevent them from becoming public “report cards.” CALL is now limited in its ability to identify NBC trends in its trends reports because NBC training completed at CTCs is not uniformly reported in a standardized format that can reliably provide comparable data to support the identification of NBC trends. The Army has a large portion of CTC after-action reports located in a database at CALL. However, because each CTC sends different or no information on NBC training, CALL does not have information available that would make it possible to do cross-unit or cross-center comparisons. CALL also stores compact discs and videotapes, some of which are entered into the electronic database. The CALL representative at each CTC maintains some portions of the take-home packages on site. 
However, at least in part because the take-home packages are considered the property of the units being trained, they are not made widely available. Also, many of the Army’s after-action reports for NBC training at the CTCs for fiscal years 2002 to 2003 were not received, not locatable, or never loaded into the database located at CALL for archiving and subsequent research. We found during our visit to CALL that its researchers were very skilled in performing database analysis, but they were limited by incomplete and nonstandard reporting for NBC training data. The Marine Corps’ written after-action reporting system does not address NBC training conducted in the combined arms exercise primarily because NBC training has not been included in that training. At Twentynine Palms, a final written exercise report containing lessons learned is prepared by the Marine Air Ground Task Force Training Command for the last 3 days of the combined arms exercise. However, the command does not prepare written after-action reports for the other major segments of the exercise. After-action feedback is primarily given orally throughout the exercise period. This oral feedback is based on observations by observers/controllers assigned to each unit being trained. In the combined arms exercises that included NBC training in 2004, the written final exercise reports did not include any lessons learned on NBC operations because this training did not occur during the final 3 days. At that time, NBC exercise scenarios had not been fully integrated into the combined arms exercises. The Marine Corps has no formal evaluation requirements for the combined arms exercise. 
The applicable Marine Corps order states that “A structured debrief will be conducted upon conclusion of each event or exercise.” A Microsoft PowerPoint briefing on the final 3 days of the exercise does identify training objectives that the participant forces used to guide them through their training exercises, and in a sample briefing we reviewed, we found an assessment of the unit’s performance for each training objective. However, NBC operations were not identified as a training objective, and the briefing included no lessons learned or recommendations for NBC defense training. NBC content is being added to the standard combined arms exercise scenario. However, the standard combined arms exercises have recently been replaced by revised combined arms exercises oriented toward current operations, and the revised combined arms exercise scenarios for Twentynine Palms contain no NBC defense training. In addition, the Marine Corps has not been archiving at any central location its reporting on any unit training—NBC or otherwise—completed at Twentynine Palms or submitting related training issues to its lessons learned system. Therefore, no after-action reports on the combined arms exercises that occur at Twentynine Palms are being placed into the Marine Corps Lessons Learned System’s database. The Marine Corps recently determined that its overall lessons learned system was not functioning well. In December 2003, a working group that studied the Marine Corps Lessons Learned System found that problems with reporting and maintaining lessons learned were Marine Corps-wide. A Marine Corps information paper reported that throughout the Marine Corps, only eight reports had been submitted to the Marine Corps Lessons Learned System in 2002. The information paper also stated that the Marine Corps plans to implement an improved Web-based lessons learned system in the future. 
It also plans to establish a permanent organization to collect, review, and maintain this improved lessons learned system. Separately, an Enduring Freedom Combat Assessment Team was formed in 2001 to collect lessons learned in Afghanistan. In 2003, the team was restructured to support Operation Iraqi Freedom. The CTCs represent a rare opportunity for Army and Marine Corps units to perform advanced training under conditions that are designed to approximate actual combat as closely as possible, thereby enabling units to assess and build upon skills learned at home stations. The services stress the importance of including NBC defense training in their exercises. Yet only a small percentage of the servicemembers passing through the CTCs encounter NBC defense training tasks because an Army or Marine Corps regulation or order requiring it is lacking. We recognize that commanders’ discretion in determining unit training plans for CTC rotations is, and should continue to be, a central part of Army and Marine Corps training doctrine. However, until units are required to perform at least minimum NBC tasks while attending the CTCs, the services will continue to risk missing a unique opportunity to (1) uniformly assess these units’ proficiency while they are operating in a field environment and (2) leverage the benefits of an objective assessment by an expert staff of units’ NBC skills. NBC lessons learned during training rotations at the combat training centers would be very useful for the services in their attempts to anticipate and train for NBC problems that may occur later during operations. Service regulations or orders specify that (1) all units at CTCs should conduct doctrinally based after-action reviews of events supported by observers/controllers, (2) lessons learned should be entered into an archived database, and (3) training unit commanders’ after-action reports should be analyzed for trends and lessons learned. 
However, service regulations or orders do not now state that NBC training at the CTCs must be captured in a standardized format. In the absence of such a requirement, the Army’s archived NBC data on training at the CTCs will remain incomplete or noncomparable and thus will not fully support research and reporting on NBC trends and lessons learned. The Marine Corps also does not employ a standard method of reporting NBC training at Twentynine Palms or providing the Marine Corps’ trend and lessons learned reporting systems with NBC training information. Until the Marine Corps standardizes the reporting formats to capture service-defined NBC training at Twentynine Palms, it will be unable to analyze, over time, the units’ NBC skills at these exercises, the effectiveness of NBC training at Twentynine Palms, or NBC trends and lessons learned. Overall, improvements to collecting, archiving, and using NBC training data could help the services capitalize on their substantial investment in maintaining CTCs and in sending units to train there, as well as to monitor the quality of NBC training and units’ NBC skill levels. To ensure that the NBC training opportunities offered to Army and Marine Corps units from training at their combat training centers are maximized and that NBC lessons learned at these centers are uniformly recorded and archived, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following two actions: Establish the minimum NBC tasks for units attending training exercises at CTCs. Standardize reporting formats to capture NBC training that occurs at the CTCs. We also recommend that the Secretary of Defense direct the Secretary of the Navy to direct the Commandant of the Marine Corps to take the following two actions: Establish the minimum NBC tasks for units attending the combined arms exercise at Twentynine Palms. 
Standardize reporting formats to capture NBC training that occurs during a combined arms exercise at Twentynine Palms. In written comments, DOD stated that it agreed with the findings and recommendations of the report and that the Army and Marine Corps have established programs to implement the recommendations. Army and Marine Corps officials indicated that they are currently taking those actions necessary to develop the NBC content to be included in future CTC rotations and modify their after-action reporting systems and regulations to ensure that NBC training completed at CTCs is appropriately reported. However, because of current operational requirements, full implementation of NBC training at CTCs will be delayed. DOD’s comments are printed in their entirety in appendix V. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Defense, the Army, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or e-mail me at [email protected]. Additional contact and staff acknowledgments are listed in appendix VI. 
To determine the extent to which Army and Marine Corps units participate in nuclear, biological, and chemical (NBC) training at the combat training centers (CTC) and the extent to which these units and personnel perform NBC tasks at the centers to service standards, we interviewed appropriate officials and reviewed pertinent documents and after-action reports at the following locations: Office of the Department of the Army, Deputy Chief of Staff, G-3, Center for Army Lessons Learned, Battle Command Training Program, Combined Arms Center-Training, Combined Arms Research Library, Fort Leavenworth, Kansas; U.S. Army Chemical School, Maneuver Support Center, and the Army Maneuver Support Center Academic Library, Fort Leonard Wood, Missouri; Training Division, Headquarters, U.S. Forces Command, Fort McPherson, Georgia; Office of the Deputy Chief of Staff for Operations and Training, Training and Doctrine Command, Fort Monroe, Virginia; Army National Training Center, Fort Irwin, California; Army Joint Readiness Training Center, Fort Polk, Louisiana; Army Combat Maneuver Training Center, Hohenfels, Germany; Marine Corps Combat Development Command, Quantico, Virginia; and Marine Air Ground Combat Center, Twentynine Palms, California. To compile a collection of planning documents and after-action reports for the Army CTC rotations that occurred during fiscal years 2002 and 2003, we visited and obtained documents from various locations. The largest collection of planning documents and after-action reports was located at the Center for Army Lessons Learned (CALL), though we also obtained some documents from other locations, including the CTCs. We were able to obtain at least some parts of after-action reporting for 41 of the 57 rotations that occurred at the National Training Center (NTC), the Joint Readiness Training Center (JRTC), and the Combat Maneuver Training Center (CMTC) in fiscal years 2002 and 2003.
The following organizations provided us with the planning documents and after-action reports for units attending the CTCs: Center for Army Lessons Learned, Fort Leavenworth, Kansas; National Training Center, Fort Irwin, California; Joint Readiness Training Center, Fort Polk, Louisiana; and Combat Maneuver Training Center, Hohenfels, Germany. To determine the extent of NBC training completed at the Army CTCs during fiscal years 2002 and 2003, we analyzed all available planning and after-action reports. As mentioned in our report, we found that NBC training that occurred was not always discussed in after-action reports; that subunits of an entire brigade experienced chemical or biological events that did not affect the overall brigade; and that observers/controllers frequently noted deficiencies in units’ basic NBC skills, often attributing them to inadequate home-station training. Because the CTCs’ formatting of NBC reporting differed and none contained an overall summary document of all the NBC training that occurred during a single rotation, we were not able to definitively determine whether we had been able to collect all pertinent documents, though we did examine all of the reporting that the CTCs and CALL said was available. The Marine Corps provided us with only two after-action reports for combined arms exercises at Twentynine Palms. It told us that there was no central repository for these after-action reports and that only two reports were located. However, because NBC training had not been introduced to the combined arms exercise until January 2004 and was suspended thereafter, we were able to determine that no after-action reports on NBC training would have been submitted. The one after-action report that the Marine Corps provided us with, for the January 2004 combined arms rotation, did not mention NBC training because this training did not occur during the last 3 days of the exercise—the only time period captured in the after-action report. 
To determine whether the Army and the Marine Corps report NBC training at the CTCs in a standardized format that allows the services to identify trends and lessons learned and to do cross-unit and cross-center comparisons, we collected all available after-action reports from the above-listed locations. These reports were all part of the after-action reporting contained in “take-home packages”—that is, the materials prepared for the units to take with them to document training completed and to aid in units’ development of home-station training plans. Because these reports contained particular names of units and comments on unit performance, they are not made generally available, which required us to obtain these reports from lessons learned repositories and the CTCs. We also compared these reports with general trends documents prepared by the Army and the Marine Corps, which expunge units’ identification and summarize the results of groups of rotations, and learned that not all NBC training at CTCs was reported because of the lack of standardized reporting formats. We conducted our review from March 2003 through October 2004 in accordance with generally accepted government auditing standards. Both the Army and the Marine Corps have defined in various publications what they believe are the essential nuclear, biological, and chemical (NBC) skills that all soldiers and Marines should have. In no case, however, do service regulations or orders prescribe where the training must take place. Specifically, applicable documents do not state that any particular NBC tasks must be included in training that units receive while they are at the Combat Training Centers (CTCs), but they do state that NBC training should be incorporated into all types of exercises. The services’ guidance and policy have left it to the discretion of commanders to determine where their units should train in the required NBC skills. 
The following is a listing of Army and Marine Corps definitions of essential NBC skills. In Field Manual 7-15, The Army Universal Task List, the Army provides a common, doctrinal foundation and catalog of the Army’s tactical missions, operations, and collective tasks. A commander can use this list as a menu in developing the unit’s mission-essential task list. The NBC tasks cited in the Army’s Universal Task List are take measures to avoid or minimize the effects of NBC attacks and reduce the effects of NBC hazards, identify NBC hazards, warn personnel/units of contaminated areas, report NBC hazards throughout the area of operations, use individual/collective NBC protective equipment, perform immediate decontamination, perform operational decontamination, perform thorough decontamination, perform area decontamination, and perform patient decontamination. The tasks listed by U.S. Army Forces Command, which are all skill level-1 NBC survival-oriented tasks, are protect yourself from chemical and biological injury/contamination using your M40-series protective mask with hood, replace the canister on your M40-series protective mask, maintain your M40-series protective mask with hood, react to chemical or biological hazard/attack, protect yourself from NBC injury/contamination with chemical protective equipment, identify chemical agents using M8 detector paper, protect yourself from NBC injury/contamination when drinking from your canteen while wearing your protective mask, administer first aid to a nerve agent casualty, administer nerve agent antidote to self (self-aid), decontaminate your skin using the M291 skin decontaminating kit, decontaminate your skin and personal equipment using an M258A1 skin decontamination kit, and decontaminate your individual equipment using the M295 individual equipment decontamination kit.
In addition, the Army requires that “Units will conduct weapons qualification on individual and crew-served weapons with personnel wearing protective equipment.” The Marine Corps lists NBC “survival standards” for each individual in Marine Corps Order 3400.3F, “Nuclear, Biological, and Chemical Defense Training.” They are as follows: 1. Identify North Atlantic Treaty Organization NBC markers. 2. Properly maintain Individual Protective Equipment. 3. Properly don, clear, and check their field protective mask within 9 seconds of an NBC alarm or attack. 4. Properly don the appropriate individual protective clothing and assigned field protective mask to Mission-Oriented Protective Posture Level 4. 5. Perform basic functions (e.g., drinking, waste removal, sleep) while in Mission-Oriented Protective Posture Level 4. 6. Perform NBC detection measures with issued detection equipment, i.e., M256A1 Chemical Detection Kit, M8 detection paper, M9 detection tape, and DT 236 radiac detector. 7. Decontaminate skin and personal equipment using M291 skin decontamination kit or other appropriate decontaminants. 8. Perform individual (emergency) Mission-Oriented Protective Posture equipment exchange. 9. React to a nuclear attack. 10. React to a chemical attack. 11. React to a biological attack. 12. Take the specific actions required to operate efficiently before, during, and after NBC attacks to reduce the effects of NBC contamination. 13. Recognize or detect chemical agent contamination and perform immediate decontamination techniques: e.g., person, weapon, clothing, equipment, position, vehicle, and crew-served weapons. 14. Treat a chemical agent casualty. 15. Be able to drink water from a canteen or other water container while masked. 16. Be able to properly format and send an NBC 1 report. 
The Marine Corps lists NBC “basic operating standards” for units in Marine Corps Order 3400.3F, “Nuclear, Biological, and Chemical Defense Training.” They are as follows: The unit will maintain its collective nuclear, biological, and chemical defense equipment in a high state of serviceability at all times. The unit must be proficient in taking the specific actions required to operate efficiently before, during, and after NBC attacks to reduce the effects of NBC contamination. The unit must be able to recognize or detect chemical agent contamination and perform immediate individual and operational decontamination techniques: e.g., person, weapon, clothing, equipment, position, vehicle, and crew-served weapons. The unit must demonstrate proficiency in contamination avoidance procedures when crossing NBC-contaminated areas. The unit must demonstrate proficiency in performing primary military duties, to include the use of crew/personal weapons and minimum/basic combat skills, while wearing Individual Protective Equipment for extended periods. The unit must demonstrate proficiency in operational and thorough decontamination procedures. The unit must demonstrate proficiency in the principles of collective protection, including passage through contamination control areas, where applicable. The unit must demonstrate proficiency in the use of dosimetric devices; chemical and biological detection; and monitoring equipment, where applicable. The unit must be able to send and receive NBC-1 reports and plot NBC-3 reports. The unit must be able to properly conduct monitor/survey missions as directed by higher headquarters personnel. The unit must be able to conduct unmasking procedures. Command Brief (1 hour): All NBC personnel will receive this instruction as a one-time prerequisite to nuclear, biological, and chemical defense instruction.
Vulnerability Analysis (2 hours): Students learn to source, develop, and contribute to unit intelligence preparation of the battlefield; conduct hazard assessments; and finally develop and recommend courses of action for NBC defense. Control Center (Nuclear) (3 hours): Students rehearse the use of the NBC warning and reporting procedures for nuclear detonations. Includes manual plotting methods, communication protocols, and operational aspects. Time of stay/exit, shielding, and decay problems are illustrated. Control Center (Chem-Bio) (2 hours): This course instructs and rehearses the student in the use of the NBC warning and reporting procedures for chemical and biological attack. Includes manual plotting methods, communication protocols, and operational aspects. Incident response through consequence management. Joint Warning and Reporting Network (3 hours): This is the prescribed automated platform for integration of NBC warning and reporting to command and control systems and networks. Radiation Safety/Depleted Uranium (1 hour): Designed to be refresher instruction for the unit. Addresses types and characteristics of ionizing radiation, medical effects, and protection standards/tasks. Reviews the current inventory of radioactive sources in the Department of Defense’s use and the handling of accidents. Unit Sustainment (3 hours): Formerly referred to as “decontamination,” sustainment is the units’ effort to recover personnel and equipment for continued use on the battlefield. This period of instruction develops the principles of decontamination and updates the NBC specialist/officer on the latest equipment and decontaminants. Special Sustainment (1 hour): The special requirements for decontamination of casualties and aircraft are instructed per current doctrine. Instruction covers site reconnaissance and the development of best practices in areas that every unit may not encounter.
Biodefense and Medical Management (2 hours): Designed for both NBC and medical personnel, the lecture covers casualty identification, triage, and decontamination requirements. Part 2 of this instruction highlights the biological sampling and modeling of the battlefield; how to collect, escort, and ship etiologic agents; laboratory protocols; reporting requirements; and fundamentals of epidemiology. Joint Mission Essential Task List (1 hour): Class begins with a review of mission-essential task development and NBC tasks at the strategic national, strategic theater, and operational levels. Lecture then details the Marine Corps’ Task List and the seven mission-essential task areas for the Marine Air Ground Task Force, focusing on sense, shape, shield, and sustain. Puts Marine Air Ground Task Force requirements into perspective and sets the stage for joint and combined operations. [Flattened three-column table comparing required after-action report sections under U.S. Army, Europe, Regulation 350-50 (CMTC Training), Forces Command Regulation 350-50-1 (NTC Training), and Forces Command Regulation 350-50-2 (JRTC Training); sections that do not apply to a given regulation are marked “No corresponding required report section.” Recoverable required sections include After Action Report, Part I: executive overview; mission objectives; general description; participating units, including specific information such as troop list, number of personnel who participated, and number and type of vehicles used (which must coincide with the current modification table of organization and equipment, broken down by vehicle type, unit requirement, and unit shipped); significant issues; limitations; and funding (including personnel, transportation type and cost, total vehicle transportation cost, and total cost reimbursed to the Combat Maneuver Training Center). After Action Report, Part II: Lessons Learned, including observation, discussion, lessons learned, recommended action, and comments; tactical lessons learned, to include command and control; maneuver (offense/defense); fire support; intelligence; air defense; mobility/countermobility; electronic warfare; nuclear, biological, and chemical defense; and combat service support, or, alternatively, addressing the battle functions or the battlefield operating system; administrative and logistics lessons learned, including deployment, redeployment, equipment draw, regeneration, and deploying to, training at, and redeploying from the CMTC; benefits of training at the National Training Center (NTC), the Joint Readiness Training Center (JRTC), or the CMTC; general narrative comments, including lessons learned on preparatory training and comments on the usability of the Army Training and Evaluation Program or other training and training support products developed by the Training and Doctrine Command (TRADOC); recommendations for improving existing doctrine, the training exercise, and the NTC or JRTC experience; and management lessons learned.]
In addition to the contact named above, Beverly Schladt, Mike Avenick, Matthew Sakrekoff, James Lawson, Leslie Bharadwaja, Gerald Winterlin, Jim Melton, R.K. Wild, Dave Mayfield, and Jay Smale made key contributions to this report. | The Department of Defense (DOD) believes that it is increasingly likely that an adversary will use nuclear, biological, or chemical (NBC) weapons against U.S. forces. Consequently, DOD doctrine calls for U.S. forces to be sufficiently trained to continue their missions in an NBC-contaminated environment. Given longstanding concerns about the preparedness of DOD's servicemembers in this critical area, GAO has undertaken a body of work covering NBC protective equipment and training. For this review, GAO was asked to determine the following: (1) To what extent do Army and Marine Corps units and personnel attending combat training centers participate in NBC training, and to what extent do these units and personnel perform NBC tasks at the centers to service standards? (2) Do the Army and the Marine Corps report NBC training at the centers in a standardized format that allows the services to identify lessons learned and to do cross-unit and cross-center comparisons? Army and Marine Corps combat training centers provide a unique opportunity for units to perform advanced training under conditions that approximate actual combat, thereby enabling units to assess and build upon skills learned at home stations. Although DOD and both services have stressed the importance of including NBC defense in all types of training, they have not established minimum NBC-related tasks for units attending the centers. Commanders sometimes reduce NBC training to focus on other priority areas. As a result, the extent of NBC training actually conducted at these centers varies widely, and some units receive little or none at all. 
For example, officials at two Army training centers estimated that during fiscal years 2002 and 2003, a typical unit training rotation for a brigade-sized unit--which may include up to 4,000 soldiers--experienced NBC events that required only about 5 percent of these troops to train in full NBC protective clothing for a total of 18 hours or more. For the Marine Corps, no NBC training was conducted during combined arms exercises at its training center for at least 5 years prior to January 2004. The Marine Corps began to introduce NBC training into its combined arms exercises in two rotations that occurred in January and February 2004 but suspended it because of other priorities related to preparing units for ongoing operations. Without minimum NBC tasks, the services often miss the opportunity to use the centers' unique environment to improve units' proficiency in NBC defense. When Army units did undergo NBC training, observers noted that many units did not perform basic NBC tasks to Army standards. For example, during fiscal years 2002 and 2003, most brigades attending one center did not meet standards for basic NBC tasks such as donning protective gear, seeking overhead shelter, and conducting unmasking procedures. Observers at the Army centers often cited inadequate home-station training as the reason units were not performing basic NBC tasks to standards. Skills in these basic tasks are normally acquired during training at home stations and lay the foundation for acquiring more complex skills associated with large-unit NBC training. When units arrive at the centers with inadequate basic NBC skills, they may not be able to take full advantage of the unique and more complex large-unit NBC training opportunities offered at these centers. The Army and the Marine Corps do not always report lessons learned on NBC training at the centers in a way that can be used to identify trends over time and allow for cross-unit and cross-center comparisons. 
Army and Marine Corps doctrine stresses the importance of identifying lessons learned during training to enable tailored training at home stations and elsewhere to reduce the likelihood that similar problems will occur during operations. Because service guidance does not require standardized reporting formats, the training centers submit different types of after-action reports that might or might not mention NBC training. This lack of standardized reporting represents opportunities lost to the services to collect comparable data to identify NBC training trends and lessons learned. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
U.S. critical infrastructure is made up of systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on the nation’s security, national economic security, national public health or safety, or any combination of these matters. Critical infrastructure includes, among other things, banking and financing institutions, telecommunications networks, and energy production and transmission facilities, most of which are owned and operated by the private sector. Sector-specific agencies (SSA) are federal departments or agencies with responsibility for providing institutional knowledge and specialized expertise as well as leading, facilitating, or supporting the security and resilience programs and associated activities of their designated critical infrastructure sectors in the all-hazards environment. Threats to systems supporting critical infrastructure are evolving and growing. Cyber threats can be unintentional or intentional. Unintentional or non-adversarial threats include equipment failures, software coding errors, and the actions of poorly trained employees. They also include natural disasters and failures of critical infrastructure on which the organization depends but that are outside of its control. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. These threat adversaries vary in terms of the capabilities of the actors, their willingness to act, and their motives, which can include seeking monetary gain or seeking an economic, political, or military advantage. Table 1 describes the sources of cyber-based threats in more detail.
Cyber threat adversaries make use of various techniques, tactics, and practices, or exploits, to adversely affect an organization’s computers, software, or networks, or to intercept or steal valuable or sensitive information. These exploits are carried out through various conduits, including websites, e-mail, wireless and cellular communications, Internet protocols, portable media, and social media. Further, adversaries can leverage common computer software programs, such as Adobe Acrobat and Microsoft Office, to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. Table 2 provides descriptions of common exploits or techniques, tactics, and practices used by cyber adversaries. Reports of cyber exploits illustrate the debilitating effects such attacks can have on the nation’s security, economy, and on public health and safety. In May 2015, media sources reported that data belonging to 1.1 million health insurance customers in the Washington, D.C., area were stolen in a cyber attack on a private insurance company. Attackers accessed a database containing names, birth dates, e-mail addresses, and subscriber ID numbers of customers. In December 2014, the Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) issued an updated alert on a sophisticated malware campaign compromising numerous industrial control system environments. Their analysis indicated that this campaign had been ongoing since at least 2011. In the January 2014 to April 2014 release of its Monitor Report, ICS-CERT reported that a public utility had been compromised when a sophisticated threat actor gained unauthorized access to its control system network through a vulnerable remote access capability configured on the system.
The incident highlighted the need to evaluate security controls employed at the perimeter and ensure that potential intrusion vectors are configured with appropriate security controls, monitoring, and detection capabilities. Federal policy and public-private plans establish roles and responsibilities for federal agencies working with the private sector and other entities to enhance the cyber and physical security of public and private critical infrastructures. These include PPD-21 and the NIPP. PPD-21 shifted the nation’s focus from protecting critical infrastructure against terrorism toward protecting and securing critical infrastructure and increasing its resilience against all hazards, including natural disasters, terrorism, and cyber incidents. The directive identified 16 critical infrastructure sectors and designated associated federal SSAs. Table 3 shows the 16 critical infrastructure sectors and the SSA for each sector. PPD-21 identified SSA roles and responsibilities to include collaborating with critical infrastructure owners and operators; independent regulatory agencies, where appropriate; and with state, local, tribal, and territorial entities as appropriate; serving as a day-to-day federal interface for the prioritization and coordination of sector-specific activities; carrying out incident management responsibilities consistent with statutory authority and other appropriate policies, directives, or regulations; and providing, supporting, or facilitating technical assistance and consultations for their respective sector to identify vulnerabilities and help mitigate incidents, as appropriate. The NIPP is to provide the overarching approach for integrating the nation’s critical infrastructure protection and resilience activities into a single national effort. 
DHS developed the NIPP in collaboration with public and private sector owners and operators and federal and nonfederal government representatives, including sector-specific agencies, from the critical infrastructure community. It details DHS’s roles and responsibilities in protecting the nation’s critical infrastructures and how sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. It emphasizes the importance of collaboration, partnering, and voluntary information sharing among DHS and industry owners and operators, and state, local, and tribal governments. The NIPP also stresses a partnership approach among the federal and state governments and industry stakeholders for developing, implementing, and maintaining a coordinated national effort to manage the risks to critical infrastructure and work toward enhancing physical and cyber resilience and security. According to the NIPP, SSAs are to work with their private sector counterparts to understand cyber risk and develop sector-specific plans that address the security of the sector’s cyber and other assets and functions. The SSAs and their private sector partners are to update their sector-specific plans based on DHS guidance to the sectors. The currently available sector-specific plans were released in 2010 to support the 2009 version of the NIPP. In response to the most recent NIPP, released in December 2013, DHS issued guidance in August 2014 directing the SSAs, in coordination with their sector stakeholders, to update their sector-specific plans. The SSAs are also to review and modify existing and future sector efforts to ensure that cyber concerns are fully integrated into sector security activities. In addition, the NIPP sets up a framework for sharing information across and between federal and nonfederal stakeholders within each sector that includes the establishment of sector coordinating councils and government coordinating councils. 
Sector coordinating councils are to serve as a voice for the sector and a principal entry point for the government to collaborate with the sector for critical infrastructure security and resilience activities. The government coordinating councils enable interagency, intergovernmental, and cross-jurisdictional coordination within and across sectors. Each government coordinating council is chaired by a representative from the designated SSA with responsibility for providing cross-sector coordination. The NIPP also recommended several activities—referred to as Call to Action steps—to guide the efforts of the SSAs and their sector partners to advance security and resilience under three broad activity categories: building on partnership efforts; innovating in risk management; and focusing on outcomes. Table 4 shows the 10 Call to Action Steps determined to have a cybersecurity-related nexus. The NIPP states that all of the identified steps, including these 10 actions with a greater relationship to enhancing cybersecurity, are not intended to be exhaustive or implemented in every sector. Rather, they are to provide strategic direction, allow for differing priorities in each sector, and enable continuous improvement of security and resilience efforts. In addition, Executive Order 13636 was issued to, among other things, address the need to improve cybersecurity through information sharing and collaboratively developing and implementing risk-based standards.
It called for the SSAs to, among other things, establish, in coordination with DHS, a voluntary program to support the adoption of the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework) by owners and operators of critical infrastructure and any other interested entities; create incentives to encourage owners and operators of critical infrastructure to participate in the voluntary program; and, if necessary, develop implementation guidance or supplemental materials to address sector-specific risks and operating environments. Sector-specific agencies determined the significance of cyber risk to the networks and industrial control systems for all 15 of the sectors in the scope of our review. Specifically, they determined that cyber risk was significant for 11 of 15 sectors. For the remaining 4 sectors, the SSAs had determined that cyber risks were not significant due to the lack of cyber dependence in the sector’s operations, among other reasons. These determinations were carried out in response to the 2009 NIPP, which directed the SSAs to consider how cyber would be prioritized among their sectors’ critical infrastructure and key resources as part of the sector-specific planning process. The SSAs and their sector stakeholders were to include an overview of current and emerging sector risks including those affecting cyber when preparing their 2010 plans. Table 5 shows the significance of cyber risk to each sector, as determined by the SSAs, as well as when these determinations were made. Since most of these determinations were made for the 2010 sector-specific planning process, they may not reflect the current risk environment of the sectors. In particular, SSAs for the 4 sectors that had not determined cyber risks to be significant during their 2010 sector-specific planning process subsequently reconsidered the significance of cyber risks to their sectors.
Also, in response to the 2013 NIPP, DHS issued guidance for developing updated sector-specific plans for 2015. According to this guidance and SSA officials, SSAs are to document how they have reconsidered the significance of cyber risks to their sectors. DHS officials stated that the SSAs have drafted their updated sector-specific plans and submitted them to DHS for review; however, the plans have not yet been finalized and released. Based on the 2010 sector-specific plans and subsequent documents and activities, the SSAs’ determinations of the significance of cyber risk to their 15 respective sectors are summarized below. DHS, in collaboration with chemical sector stakeholders, determined that cyber risk was a significant priority for the sector. In 2009, DHS and the chemical sector coordinating council issued the Roadmap to Secure Control Systems in the Chemical Sector, which documented the cybersecurity concerns for chemical facilities’ industrial control systems and the need to develop cyber risk mitigation actions to be addressed over a 10-year period. In addition, the 2010 Chemical Sector-Specific Plan highlighted the importance of cyber systems to the sector and promoted the need for owners and operators of sector assets to apply risk assessment and management methodologies to identify cyber threats to their individual operations. DHS did not consider cyber risks to be significant for the commercial facilities sector. The commercial facilities sector’s 2010 sector-specific plan does not identify cyber risks as significant to the sector. DHS officials stated that the decision was based on the sector’s diversity of components and the manner in which cyber-related technology is employed. According to these officials, a cyber event affecting one facility’s cyber systems (e.g., access control or environmental systems) would not be likely to affect the cyber assets of other facilities within the sector.
However, in July 2015, DHS officials stated that, as part of the updated sector planning process, they had recognized cyber risk as a high-priority concern for the sector. In particular, they noted that the sector uses Internet-connected systems for processes like ticketing and reservations, so a large-scale communications failure or cyber attack could disrupt the sector’s operations. DHS, in collaboration with communications sector stakeholders, completed a risk assessment in 2012 for the communications sector that identified cyber risk as a significant priority; however, the assessment noted that due to the sector’s diversity and level of resiliency, most of the threats would only result in local or regional communications disruptions or outages. The assessment evaluated cyber threats such as malicious and non-malicious actors committing alterations or intrusions that could pose local, regional, or national level risks to broadcasting, cable, satellite, wireless, and wireline communications networks. The risk assessment also concluded that malicious actors could use the communications sector to attack other sectors. DHS did not consider cyber risk to be significant for the critical manufacturing sector. The sector’s 2010 sector-specific plan stated that many critical manufacturing owners and operators from this diverse and dispersed sector had completed asset, system, or network-specific assessments on their own initiative. Also, the plan identified cyber elements that support the sector’s functional areas, including electronic systems for processing the information necessary for management and operation or for automatic control of physical processes in manufacturing. This applied primarily to the production of metals, machinery, electrical equipment, and heavy equipment. However, the critical manufacturing sector relies upon other sectors such as communications and information technology where addressing cyber risk is a priority. 
DHS officials stated that, since 2010, they have identified sector critical cyber functions and services, and the sector’s draft 2015 sector-specific plan notes this as a step toward conducting a sector-wide cyber risk assessment. DHS officials considered cyber risks for the dams sector and acknowledged that cyber threats could have negative consequences; however, they determined cyber risks to not be significant for the sector. Specifically, the sector’s 2010 sector-specific plan concluded that the sector’s cyber environment and its legacy industrial control systems were designed to operate in a fairly isolated environment using proprietary software, hardware, and communications technology and, as a result, were designed with cybersecurity as a low priority. However, the officials stated that vulnerabilities in industrial control systems pose cyber-related risks to the sector’s operations. In the sector-specific plan, they acknowledged that the evolution of industrial control systems to incorporate network-based and Internet Protocol-addressable features and more commercially available technologies could introduce many of the same vulnerabilities that exist in current networked information systems. DHS officials also stated that they are addressing cybersecurity for the sector with their update to the sector-specific plan and the sector’s roadmap for securing control systems, as well as with the development of a capability maturity model specifically for the dams sector. At the time of our review, the updated sector-specific plan was still in draft. The Department of Defense (DOD) determined that cyber threats to contractors’ unclassified information systems represented an unacceptable risk of compromise to DOD information and posed a significant risk to U.S. national security and economic security interests. 
In the sector’s 2010 sector-specific plan, DOD, in collaboration with its sector partners, listed cybersecurity and managing risk to information among its five goals for the sector’s protection and resilience. In addition, DOD has issued annual “for official use only” reports on its progress defending DOD and the defense industrial base against cyber events for fiscal years 2010 through 2014. The reports identify definitions and categories of cyber events, exploited vulnerabilities, and adversary intrusion methods based on data from several key DOD organizations with cybersecurity responsibilities and other intelligence sources. The reports are to provide an annual update of cyber threats, threat sources, and vulnerability trends affecting the defense industrial base. DHS officials, in collaboration with sector stakeholders, concluded that cyber threats could have a significant impact on the emergency services sector’s operations. The risk assessment process brought together subject matter experts to perform an assessment of cyber risks across six emergency services sector disciplines: law enforcement, fire and emergency services, emergency medical services, emergency management, public works, and public safety communications and coordination/fusion. They developed cyber risk scenarios across multiple sector disciplines and applied DHS’s Cybersecurity Assessment and Risk Management Approach methodology to reach their conclusion. The results were reported in 2012 in the Emergency Services Sector Cyber Risk Assessment. In a previous GAO review of cybersecurity in the emergency services sector, we reported that sector planning activities, including the cyber risk assessment, did not address the more interconnected, Internet-based emerging technologies becoming more prevalent in the emergency services sector. As a result, the sector could be vulnerable to cyber risks in the future without more comprehensive planning. 
We recommended that the Secretary of Homeland Security collaborate with emergency services sector stakeholders to address the cybersecurity implications of implementing technology initiatives in related plans. DHS agreed with our recommendation and stated that the updated sector-specific plan will include consideration of the sector’s emerging technology. At the time of our review, the updated sector-specific plan was still in draft. The Department of Energy (DOE) identified cyber risks as significant and a priority for the energy sector. Specifically, in the sector’s 2010 sector-specific plan, DOE, in collaboration with its sector stakeholders, included cybersecurity among the sector’s goals to enhance preparedness, security, and resilience. DOE officials stressed that their risk management approach focuses on resilience, especially in the context of ensuring the resilience of the electric grid. In addition, the 2011 Roadmap to Achieve Energy Delivery System Cybersecurity, developed by energy sector stakeholders, including responsible DOE officials, recognized the continually evolving cyber threats and vulnerabilities and provided a framework for energy sector stakeholders to survive a cyber incident while sustaining critical functions. Treasury, in collaboration with sector stakeholders, identified cyber risk as significant to the financial services sector. Specifically, the 2010 financial services sector-specific plan stated that all of the sector’s services rely on its cyber infrastructure, which necessitates that cybersecurity be factored into all of the sector’s critical infrastructure protection activities. In addition, as a highly regulated sector, the financial services sector has been required to undergo risk assessments by financial regulators to satisfy regulatory requirements.
In July 2015, Treasury officials stated that they leveraged the collective body of risk assessment data to determine the sector’s overall risk profile, which will be included in the 2015 sector-specific plan. At the time of our review, the updated sector-specific plan was still in draft. The U.S. Department of Agriculture (USDA) and the Department of Health and Human Services’ Food and Drug Administration (FDA), in collaboration with their sector stakeholders, determined that the significance of cyber risk was low for the food and agriculture sector when the sector-specific plan was developed in 2010. As stated in the plan, the sector did not perceive itself as a target of cyber attack and concluded that, based on the nature of its operations, a cyber attack would pose the risk of only minimal economic disruption. However, the plan acknowledged the rapidly evolving cyber environment and the need to revisit the issue in the future. In July 2015, USDA officials stated that they had reconsidered the significance of cyber risk and the role of cybersecurity in the sector and that it would be reflected in the yet-to-be-released 2015 sector-specific plan. In addition, according to USDA officials, they had completed a sector risk assessment effort with assistance from DHS. The Department of Health and Human Services (HHS), in collaboration with its sector partners, identified cyber risk as significant to the health care and public health sector. Specifically, the 2010 sector-specific plan identified cybersecurity and mitigating risks to the sector’s cyber assets as one of four service continuity goals for the sector. The plan’s cybersecurity risk assessment section identified and categorized common cyber threats, vulnerabilities, consequences, and mitigation strategies for the sector. Also, HHS and its partners added cyber infrastructure protection as a research and development priority in the sector-specific plan.
In addition, health care entities, such as health plans and providers that maintain health data, must assess risks to cyber-based systems based on Health Insurance Portability and Accountability Act of 1996 security requirements. DHS, in collaboration with information technology sector stakeholders, identified cyber risk as a sector priority. DHS and its sector partners determined that the consequences of cyber incidents or events would be of great concern and would affect the sector’s ability to produce or provide critical products and services. DHS worked with public and private information technology stakeholders to complete the Information Technology Sector Baseline Risk Assessment in 2009. The risk assessment focused on risks to the processes involved in the creation of IT products and services and critical IT functions including research and development, manufacturing, distribution, upgrades, and maintenance— and not on specific organizations or assets. DHS and its nuclear sector stakeholders prioritized cyber risk as a significant risk for the nuclear sector. According to the 2011 Roadmap to Enhance Cyber Systems Security in the Nuclear Sector, they determined that the cyber systems supporting the nuclear sector are at risk due to the increasing volume, complexity, speed, and connectedness of the nuclear sector’s systems. Therefore, DHS and its sector partners included protecting against the exploitation of the sector’s cyber assets, systems, and networks among its sector goals and objectives for a comprehensive protective posture. Addressing cyber risk is a significant priority for the transportation systems sector. In the 2010 transportation systems sector-specific plan, DHS’s Transportation Security Administration (TSA) and U.S. 
Coast Guard acknowledged the importance of cyber assets to the sector’s operations across the various transportation modes and included an overview of the risk management framework, an all-hazards approach to be applied to the physical, human, and cyber components of the infrastructure. They also established goals and objectives to shape their sector partners’ approach for managing sector risk. As part of their objective to enhance the all-hazard preparedness and resilience of the transportation systems sector, they included the need to identify critical cyber assets, systems, and networks and implement measures to address strategic cybersecurity priorities. For fiscal year 2014, TSA assessed risks to the transportation systems sector and reported the outcome to Congress. Although the assessment did not specifically quantify cyber risks for the sector, it considered cyber threats to transportation modes in hypothetical scenarios, such as the effect of a cyber attack disabling a public transit system. In addition, TSA’s Office of Intelligence and Analysis provides transportation mode-specific annual threat assessments that include malicious cyber activity as part of the analysis. For example, the pipeline modal threat assessment considered computer network attacks that could disrupt pipeline functions and computer network exploitations that could allow unauthorized network access and theft of information. In addition, we have previously reported that the Coast Guard needs to address cybersecurity in the maritime port environment by, among other things, including cyber risks in its biennial maritime risk assessment. Subsequently, the Coast Guard released its updated risk assessment for maritime operations, which identified the need to address cyber risk but did not identify vulnerabilities in relevant cyber assets.
The Environmental Protection Agency (EPA), in collaboration with sector partners, determined that a cyber attack is a significant risk to the water sector. Cyber attacks on industrial control systems are among the plausible hazards that threaten the water and wastewater systems sector, according to the risk assessment portion of the 2010 sector-specific plan. EPA concluded that attacks on the systems used to monitor and control water movement and treatment could disrupt operations at water and wastewater facilities, although the capability to employ manual overrides for critical systems could reduce the consequences of an attack. EPA recommended that water sector facilities regularly update or conduct an all-hazards risk assessment that includes cyber attacks as a priority threat. Further, the Roadmap to a Secure and Resilient Water Sector, developed in 2013 by EPA, DHS, and water sector partners, included advancing the development of sector-specific cybersecurity resources as a top priority for the sector. Sector-specific agencies generally took actions to mitigate cyber risks and vulnerabilities for their respective sectors that address the Call to Action steps in the National Infrastructure Protection Plan. While the steps are not required of the SSAs, they are intended to guide national progress while allowing for differing priorities in different sectors. The SSAs had taken action to address most of the nine NIPP Call to Action steps. While SSAs for 12 of the 15 sectors had not identified incentives to promote cybersecurity in their sectors, as called for by one of the Call to Action steps, all the SSAs have participated in a working group to identify appropriate incentives to encourage cybersecurity improvements across their respective sectors. In addition, SSAs for 3 of 15 sectors had not yet made significant progress in advancing cyber-based research and development within their sectors because it had not been an area of focus for those sectors.
DHS guidance for updating the sector-specific plans directs the SSAs to incorporate the NIPP’s actions to guide their cyber risk mitigation activities, including cybersecurity-related actions to identify incentives and promote research and development. Figure 1 depicts NIPP Call to Action steps addressed by SSAs. (App. II provides further details on actions taken to address the Call to Action steps for each sector.) DHS implemented activities to mitigate the cyber risks for the chemical sector for eight of the nine NIPP Call to Action steps; however, it had not established incentives to encourage its sector partners to voluntarily invest in cybersecurity-enhancing measures. DHS has developed technical resources, cybersecurity awareness tools, and information-sharing mechanisms among its activities to enhance the sector’s cybersecurity. DHS officials described other cybersecurity activities in development, including updates to sector cybersecurity guidance that could include incentives; however, they were unable to identify specific incentives to encourage cybersecurity across the sector. DHS conducted cyber mitigation activities that aligned with eight of the nine NIPP Call to Action steps for the commercial facilities sector. DHS provided technical assistance and supported information-sharing efforts for the sector. For example, it developed a risk self-assessment tool in conjunction with sector partners to raise awareness of the importance of their cyber systems. DHS also promoted a number of information-sharing mechanisms available through its Office of Cybersecurity and Communications, including the dissemination of alerts through the U.S. Computer Emergency Readiness Team (US-CERT), ICS-CERT, and the Commercial Facilities Cyber Working Group, among others. However, DHS did not identify efforts to establish incentives to encourage commercial facilities sector partners to implement cybersecurity-enhancing measures.
DHS worked to reduce risk to the communications sector through collaborative cyber risk mitigation activities that align with eight of nine NIPP Call to Action steps. However, DHS did not establish incentives to promote cybersecurity for the sector. As previously stated, DHS and its communications sector partners completed the 2012 National Sector Risk Assessment for Communications, which examined risks from cyber incidents or events that threaten the sector’s cyber assets, systems, and networks. According to DHS officials, it coordinated mitigation activities with its communications sector partners and addressed risks identified through the assessment process. In addition, officials explained that it implemented or facilitated sector-wide information-sharing mechanisms with such entities as the National Cybersecurity and Communications Integration Center, National Infrastructure Coordinating Center, and National Coordinating Center for Telecommunications and Communications Information Sharing and Analysis Center. Although DHS had not implemented specific cyber-related incentives for the communications sector, DHS officials stated that National Security staff and the Office of Policy have been working on possible national incentives such as tax credits for future use. DHS focused cyber risk mitigation activities in seven of nine NIPP Call to Action steps for the critical manufacturing sector. However, cyber risk mitigation activities did not include efforts to incentivize cybersecurity or support cybersecurity-related research and development. Among its cyber risk mitigation activities, DHS participated in information sharing efforts through the sector coordinating council to enhance situational awareness; and led outreach efforts to encourage diverse (i.e., small, medium, and large companies) participation in the council as an activity to build national capacity. 
Although specific incentives to encourage cybersecurity across the sector had not been put in place, DHS officials stated that they had been involved in a working group to study possible options such as cyber insurance. While the critical manufacturing sector-specific plan and associated annual report of sector activities indicated that goals and needs regarding sector research and development are areas for future development, DHS did not provide any examples of specific research and development activities addressing the sector’s cybersecurity. DHS developed cyber risk mitigation activities for the dams sector focused on eight of nine NIPP Call to Action steps. However, DHS did not identify activities leveraging incentives to advance security and resilience. DHS officials stated that their efforts had not focused on incentives. Among its cyber risk mitigation activities, DHS officials facilitated the development of the Dams Sector Roadmap to Secure Control Systems, developed in 2010, which focuses on the cybersecurity of industrial control systems where cyber risks may be more significant for individual entities. DHS also supported information-sharing mechanisms by promoting sector-wide information sharing and organized a cybersecurity working group to discuss cyber-relevant topics during quarterly meetings. Further, the department disseminated cyber vulnerability information to sector partners through advisories and alerts from DHS’s ICS-CERT and US-CERT. DOD devised cyber risk mitigation activities that align with eight of nine NIPP Call to Action steps but had not established incentives to promote cybersecurity. Cyber risk mitigation activities included sharing threat information and mitigation strategies for enhanced situational awareness and participating in DOD-centric exercises, among others.
Although DOD did not identify specific incentives to encourage cybersecurity in the defense industrial base sector, DOD officials stated that they joined an interagency effort to explore various incentives that might be offered to industry to encourage use of the NIST Cybersecurity Framework. In addition, DOD officials noted that they have worked with the General Services Administration to develop strategic guidelines to incorporate cybersecurity standards in requirements for DOD contractors; however, this effort would not be part of DOD’s voluntary sector cybersecurity program. DHS established or facilitated cyber risk mitigation activities for eight of nine NIPP Call to Action steps; however, it had not instituted cybersecurity incentives. DHS officials stated that grants to state and local governments as incentives to encourage cybersecurity were not available, and no other types of incentives were identified. Among its activities, the department collaborated with emergency services sector partners in March 2014 to develop the Emergency Services Sector Roadmap to Secure Voice and Data Systems, which identified and discussed proposed risk mitigation activities and included justification for the response, sector context, barriers to implementation, and suggestions for implementation. DHS officials also noted various information-sharing mechanisms that disseminate cyber threat and vulnerability information to sector partners and allow reporting back to DHS. DOE instituted or supported cyber risk mitigation activities that correspond to all nine of the NIPP Call to Action steps. For example, DOE provided grants to share the costs of sector partners’ cybersecurity innovation efforts as an incentive for advancing cybersecurity and to support research and development of solutions to improve critical infrastructure security and resilience. 
Other activities to encourage cybersecurity in the sector included developing cybersecurity guidance to promote the use of NIST's Cybersecurity Framework and establishing or supporting cyber threat information-sharing mechanisms. DOE also developed and implemented the Cybersecurity Risk Information Sharing Program, a public-private partnership to facilitate the timely sharing of cyber threat information and develop situational awareness tools that enhance electric utility companies' ability to identify, prioritize, and coordinate the protection of their critical infrastructure. The Department of the Treasury implemented or facilitated activities that served to mitigate cyber risk for the financial services sector. These activities correspond to eight of the nine NIPP Call to Action steps. However, Treasury had not developed incentives to encourage cybersecurity in the sector through its voluntary critical infrastructure protection program. Treasury officials noted that they foresee developing incentives as a result of a report to the President, prepared pursuant to an Executive Order 13636 requirement, that outlined an approach for policymakers to evaluate the benefits and relative effectiveness of government incentives in promoting adoption of NIST's Cybersecurity Framework. Using the results of the updated sector planning process to inform its efforts could assist Treasury in developing any such incentives, as appropriate. We have previously reported on additional efforts to address cyber risk in this sector. In July 2015, we reported on cyber attacks against depository institutions, banking regulators' oversight of cyber risk mitigation activities, and the process for sharing cyber threat information. Specifically, we found that smaller depository institutions were greater targets for cyber attacks.
Also, we noted that although financial regulators devoted considerable resources to overseeing information security at larger institutions, their limited IT staff resources generally meant that examiners with little or no IT expertise were performing IT examinations at smaller institutions. As a result, we recommended that these regulators collect and analyze additional trend information that could further increase their ability to identify patterns in problems across institutions and better target their reviews. Finally, with cyber threat information coming from multiple sources, including from Treasury and other federal entities, recipients contacted in the review found federal information repetitive, not always timely, and not always readily usable. To help address these needs, Treasury had various efforts under way to obtain such information and confidentially share it with other institutions, including participating in groups that monitor and provide threat information on cyber incidents. USDA and FDA, as co-SSAs for the food and agriculture sector, had cyber risk mitigation activities addressing six of the nine NIPP Call to Action steps. For example, the SSAs had encouraged sector-wide participation in DHS’s program to promote NIST’s Cybersecurity Framework, participated in the process to identify any cyber-dependent critical functions and services, and supported threat briefings to enhance situational awareness across the sector. According to food and agriculture SSA officials, they had other activities in progress including facilitated sessions with their sector stakeholders as part of assessing risks to the sector and considering the development of food and agriculture sector-specific NIST Cybersecurity Framework implementation guidance to make the framework more relatable to food and agriculture stakeholders. 
However, other areas, including incentives to promote cybersecurity, research and development of security and resilience solutions, and lessons learned from exercises and incidents, have yet to be developed. As stated earlier, during the 2010 sector-specific planning process, cybersecurity risk was not considered significant for the sector, but USDA and FDA officials stated that they had incorporated cyber risk into their updated sector-specific plan and they continue to develop cybersecurity- related activities for the sector. HHS developed or supported activities addressing eight of the nine NIPP Call to Action steps. For example, HHS leveraged the private sector clearance program and access to classified information as incentives for sector stakeholders to participate in cybersecurity-enhancing activities. However, HHS had not performed any activities related to cybersecurity research and development. HHS officials stated that promoting research and development efforts to enhance the sector’s cybersecurity was not a focus of their cyber risk mitigation activities during fiscal years 2014 and 2015. DHS, in collaboration with its information technology sector partners, implemented risk mitigation activities to enhance the sector’s cybersecurity environment. We identified activities that addressed eight of nine NIPP Call to Action steps. DHS’s IT sector cyber risk mitigation activities included the promotion of incident response and recovery capabilities, support for various cyber-related information sharing mechanisms, and capabilities for technical assistance to sector entities. However, DHS had not specifically identified and analyzed incentives to improve cybersecurity within the IT sector. DHS officials stated that they have collaborated with other federal agencies to develop options for cybersecurity enhancement incentives for the sector. DHS carried out risk mitigation activities that addressed eight of the nine NIPP Call to Action steps. 
These activities included collaborative efforts through established working groups and councils to share information about cybersecurity-related alerts, advisories, and strategies. DHS officials responsible for nuclear SSA efforts referred to the Roadmap to Enhance Cyber Systems Security in the Nuclear Sector, guidance they developed in June 2011 and disseminated to sector partners for determining cyber risk and a vision for mitigating it over a 15-year period. However, DHS's cyber risk mitigation activities did not include incentives for nuclear sector partners to enhance cybersecurity. The Department of Transportation and DHS's TSA and U.S. Coast Guard put in place cyber risk mitigation activities in line with all nine NIPP Call to Action steps. For example, TSA shared cyber threat intelligence and information from the National Cybersecurity and Communications Integration Center with multiple transportation modes through its threat dissemination channels. In addition, at TSA's request, classified information had been "tearlined," or downgraded, so that it could be shared with sector officials who lack security clearances without disclosing sensitive and restricted information. Further, the U.S. Coast Guard used DHS's Port Security Grant Program as an incentive for cybersecurity efforts in the maritime subsector; this grant program provides funding for maritime transportation security measures, including cybersecurity. However, as we have previously reported, this program did not always make use of cybersecurity-related expertise and other information in allocating grants. Accordingly, we recommended that the program take steps to make better-informed funding decisions. In addition, TSA officials stated that they have participated in working groups to identify other cybersecurity-related incentives across the various transportation modes.
EPA incorporated cyber risk mitigation activities that aligned with eight of the nine NIPP Call to Action steps. However, EPA had not established incentives to encourage sector partners to enhance their security and resiliency. EPA officials stated that providing funds to support cybersecurity enhancements would be an incentive for their sector partners; however, they lacked the resources to offer grants to implement security measures. EPA officials also stated that they are working on implementing recommendations from the Critical Infrastructure Partnership Advisory Council's Water Sector Cybersecurity Strategy Workgroup, which include exploring ways to demonstrate how the benefits of implementing cybersecurity enhancements outweigh the costs of cyber incidents, as an incentive to encourage investment in cybersecurity improvements. Sector-specific agencies use various collaborative mechanisms to share cybersecurity-related information across all of the sectors. Presidential Policy Directive 21 (PPD-21) states that sector-specific agencies are to coordinate with DHS and other relevant federal departments and agencies and collaborate with critical infrastructure owners and operators to strengthen the security and resiliency of the nation's critical infrastructure. SSAs share information and collaborate across sectors primarily through a number of councils, working groups, and information-sharing centers established by federal entities. The mechanisms identified during our review for SSAs to collaborate across the sectors are summarized, along with the number of sectors represented in each council or group by their respective SSA, in table 6. The mechanisms provide SSAs opportunities to interact, collaborate, and coordinate with one another. For example, each of the sectors we reviewed used working groups created under the Critical Infrastructure Partnership Advisory Council.
According to the CIPAC 2013 annual report, in 2012 there were 60 working groups that held approximately 200 meetings with objectives such as information sharing, training and exercises, and risk management. In addition, SSAs used their respective government coordinating councils to coordinate with other SSAs about interdependencies and to gain access to needed expertise about the operations of other sectors. For example, DHS officials stated that the communications sector’s government coordinating council membership provides the expertise necessary to fulfill the council’s mission. They stated that its current membership includes representatives from the DOD, DOE and Treasury, among others, and from multiple DHS components. Further, SSAs continually referred to the Cross-Sector Cyber Security Working Group and the Industrial Control System Joint Working Group as two of the main cybersecurity-related collaborative opportunities for federal agencies. Both of these working groups facilitate government sharing of information among officials representing different sectors. The Cross-Sector Cyber Security Working Group operates under DHS’s Office of Cybersecurity and Communications. It provides the SSAs the opportunity to establish and maintain cross-sector partnerships; work on cross-cutting issues, such as incentives to encourage cybersecurity actions; and identify cyber dependencies and interdependencies that allow them to share information on cybersecurity trends that can affect their respective sectors. According to DHS, more than 100 members attend monthly meetings to share information and activities about their respective sectors. Of the SSAs representing the 15 sectors we reviewed, SSAs for 14 sectors indicated in their documentation or statements that they were active participants in this working group. 
The Industrial Control System Joint Working Group was established by DHS’s Industrial Control Systems Cyber Emergency Response Team to facilitate information sharing and reduce the risk to the nation’s industrial control systems. According to DHS, the goal of this working group is to continue and enhance the collaborative efforts of the industrial control systems stakeholder community by accelerating the design, development, and deployment of secure industrial control systems. SSAs for 12 of the 15 sectors within the scope of our review were active participants in the working group. For example, HHS officials stated that they attend the Industrial Control System Joint Working Group meetings as a way to analyze relationships and identify overlapping actions with other sectors. Table 7 provides examples of cross-sector collaboration in relation to the sectors. In addition to the mechanisms identified above, further collaboration occurred through the co-location of sectors’ SSAs within one department. DHS, as the SSA for eight critical infrastructure sectors, has six of the sectors assigned to officials under the Infrastructure Protection group, and two under the Cybersecurity and Communications group. DHS’s Office of Infrastructure Protection officials representing several SSAs stated that they leverage DHS’s Office of Cybersecurity and Communications capabilities and resources for their sectors. Further, housing these responsibilities within the same organization provided efficiencies for their respective critical infrastructure sectors. For example, according to documentation for the critical manufacturing sector SSA, officials are leveraging training curricula produced by other Office of Infrastructure Protection SSA officials. 
Additionally, DHS had co-located the National Cybersecurity and Communications Integration Center and the National Infrastructure Coordinating Center, bringing together two 24x7 watch centers that share physical and cyber information related to critical infrastructure. Finally, SSAs used the Homeland Security Information Network (HSIN) sector pages to collaborate across sectors. HSIN is a network for homeland security mission operations to share sensitive but unclassified information, including with the critical infrastructure community. It is to provide real-time collaboration tools including a virtual meeting space, document sharing, alerts, and instant messaging. Officials from SSAs associated with 14 of the 15 sectors stated that they used HSIN to share information with stakeholders within their respective sectors. For example, within the dams HSIN portal, the sector implemented a Suspicious Activity Report online tool to provide users with the capability to report and retrieve information pertaining to suspicious activities that could compromise a facility or system in a manner that would cause an incident jeopardizing life or property. Additionally, officials from the chemical sector stated that they use HSIN for the coordination of cybersecurity incidents within the sector, and officials from the critical manufacturing SSA stated that when entities from their sector reach out to them for more information on threats or alerts, they direct them to subscribe to the critical manufacturing HSIN page. The NIPP includes guidance to SSAs to focus on the outcomes of their security and resilience activities. Specifically, as noted earlier, one of the NIPP Call to Action steps directs SSAs and their sector partners to identify high-level outcomes to facilitate evaluation of progress toward national goals and priorities, including securing critical infrastructure against cyber threats.
In addition, the NIPP risk management framework, used as a basis for the sector-specific plans, includes measuring the effectiveness of the SSAs’ risk mitigation activities as a method of monitoring sector progress. Among the SSAs, DOD, DOE, and HHS had established performance metrics to monitor cybersecurity-related activities, incidents, and progress in their sectors. DOD monitored cybersecurity for the defense industrial base sector through reports of cyber incidents and cyber incidents that were blocked; reports from owners and operators regarding efforts to execute the sector-specific plan’s implementation actions; and the number of cyber threat products disseminated by DOD to cleared companies and the timeliness of shared threat information. DOD also prepared annual reports for Congress for fiscal years 2010 through 2014 that provided information on sector performance metrics. DOE developed the ieRoadmap, an interactive tool designed to enable energy sector stakeholders to map their energy delivery system cybersecurity efforts to specific milestones identified in the Roadmap to Achieve Energy Delivery Systems Cybersecurity. DOE also established the Cybersecurity Capability Maturity Model program to support ongoing development and measurement of cybersecurity capabilities. The voluntary program provides a mechanism for measuring cybersecurity capabilities from a management and program perspective. HHS monitored cybersecurity metrics such as the number of subscribers to receive its security alerts and incidents of health information security breaches. The Health Information Technology for Economic and Clinical Health (HITECH) Act requires that health care data breaches be reported to the affected individuals and HHS, compiled in an annual HHS report to Congress, and for breaches affecting 500 or more individuals, shared with the media. 
HHS officials stated that they use the information on data breaches as an indicator of cybersecurity-related trends for the sector. However, SSAs for the other 12 sectors had not developed or reported performance metrics, although some had efforts under way to do so. For selected sectors, including financial services and water and wastewater systems, SSAs emphasized that they rely on their private sector partners to voluntarily share information and so are challenged in gathering the information needed to measure efforts. Sector stakeholders are not necessarily willing to openly share potentially sensitive cybersecurity-related information. Also, the DHS guidance to the SSAs for updating their sector-specific plans includes directions to create new metrics to evaluate the sectors' security and resilience progress; however, the plans have not been finalized and released. DHS had not developed performance metrics to monitor the cybersecurity progress of its eight sectors, although, according to agency officials, such efforts are under way. For example, DHS lacked metrics for the chemical sector; however, officials stated that multiple industry working groups were working on cyber performance metrics to measure progress at a very high level. In addition, in 2011, a nuclear cybersecurity roadmap document was released that outlined milestones and specific cybersecurity goals for the sector over a 15-year period, including the need for metrics to measure and assess the sector's cybersecurity posture. The nuclear sector roadmap provides near-, mid-, and long-term goals but not specific measures or criteria to assess the sector's cybersecurity posture. Further, according to DHS officials, a number of initiatives were begun to gather performance-related information, including the following: DHS's Programmatic Planning and Metrics Initiative was established in October 2014 to gather data from the department's sectors and monitor their cybersecurity progress.
However, as of the time of our review, the initiative had only limited historical data. DHS’s Sector Outreach and Programs Division plans to implement program metrics to measure and analyze adoption of cybersecurity practices and NIST’s Cybersecurity Framework across the sectors. DHS officials for the information technology and communications sectors stated that they had proposed performance metrics to be implemented through 2018. In a review of cybersecurity related to the nation’s communications networks, we reported that DHS and its partners had not developed outcome-based metrics related to the cyber-protection activities for the communications sector. We recommended that DHS and its sector partners develop, implement, and track sector outcome-oriented performance measures for cyber protection activities related to the nation’s communications networks. Regarding the financial services sector, Treasury officials stated that the department does not have performance metrics to chart the sector’s cybersecurity-related progress. However, according to Treasury officials, the sector coordinating council is working with the Financial and Banking Information Infrastructure Committee to identify metrics to evaluate progress in the sector. According to the officials, identifying actionable metrics based on cyber risk mitigation programs is a challenge. Treasury officials emphasized that the information needed is privately owned and may or may not be voluntarily shared with government partners. The food and agriculture 2010 sector-specific plan stated that the sector did not have metrics to measure the effectiveness of risk mitigation efforts, although it acknowledged the need to establish tracking and monitoring mechanisms. The plan also noted that sector partners, including state agencies and private industry, may view reporting programmatic data as a burden and question the security of the data once reported. 
In December 2014, USDA officials noted that they do not have formal mechanisms to measure sector progress, although survey results collected through food safety inspection activities have some security elements. The ongoing process to update the sector-specific plan provides USDA and HHS an opportunity to consider possible performance metrics for monitoring the sector's cybersecurity progress. The transportation systems sector SSAs had also not instituted mechanisms to evaluate the progress of sector entities in achieving a more secure sector. For example, TSA officials stated that they are developing cyber metrics in line with the 2014 Sector-Specific Plan Guidance; however, the officials noted that their industry partners are reluctant to share information needed to monitor improvement in the sector because they fear regulation. Finally, EPA does not collect performance information to provide metrics on the effectiveness of its cybersecurity programs for the water sector. Agency officials noted that the lack of statutory authority is a major challenge to collecting performance metrics data. In the absence of statutory authority or agency policy, EPA must work with water sector associations to collect the information across the sector. However, water utilities may be reluctant to voluntarily report security information to EPA. EPA is also working with the Water Sector Coordinating Council to identify performance metrics for implementation of NIST's Cybersecurity Framework in the water sector, according to agency officials. Until SSAs develop performance metrics and collect data to report on the progress of their efforts to enhance the sectors' cybersecurity posture, they may be unable to adequately monitor the effectiveness of their cyber risk mitigation activities and document the resulting sector-wide cybersecurity progress.
Overall, SSAs are acting to address sector cyber risk, but additional monitoring actions could enhance their respective sectors' cybersecurity posture. Most SSAs had identified the significance of cyber risk to their respective sectors as part of the 2010 sector-specific planning process, with four sectors concluding that cyber risk was not significant at that time but subsequently reconsidering the significance of cyber risks to their sectors. To prepare the 2015 updates to their sector-specific plans, the planning guidance directed the SSAs to address their current and emerging sector risks, including the cyber risk landscape and key trends shaping their approach to managing risk. Toward this end, all of the SSAs had generally performed cyber risk mitigation activities that address the NIPP's Call to Action steps, and regarding incentives, one area not addressed by most of the SSAs, efforts had begun to determine appropriate ways to encourage additional cybersecurity-related efforts across the nation's critical infrastructures. To their credit, SSAs are engaged in multiple public-private and cross-sector collaboration mechanisms that facilitate the sharing of information, including cybersecurity-related information. However, most SSAs have not developed metrics to measure and improve the effectiveness of all their cyber risk mitigation activities and their sectors' cybersecurity posture. As a result, SSAs may not be able to adequately monitor and document the benefits of their activities in improving the sectors' cybersecurity posture or determine how those efforts could be improved.
To better monitor and provide a basis for improving the effectiveness of cybersecurity risk mitigation activities, we recommend that, informed by the sectors' updated plans and in collaboration with sector stakeholders, the Secretary of Homeland Security direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the chemical, commercial facilities, communications, critical manufacturing, dams, emergency services, information technology, and nuclear sectors' cybersecurity progress; the Secretary of the Treasury direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the financial services sector's cybersecurity progress; the Secretaries of Agriculture and Health and Human Services (as co-SSAs) direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the food and agriculture sector's cybersecurity progress; the Secretaries of Homeland Security and Transportation (as co-SSAs) direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the transportation systems sector's cybersecurity progress; and the Administrator of the Environmental Protection Agency direct responsible officials to develop performance metrics to provide data and determine how to overcome challenges to monitoring the water and wastewater systems sector's cybersecurity progress. We provided a draft of this report to the Departments of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and the Treasury and to EPA. In written comments signed by the Director, Departmental GAO-OIG Liaison Office (reprinted in app. III), DHS concurred with our two recommendations. DHS also provided details about efforts to address cybersecurity in the sectors for which DHS has responsibility as the SSA.
DHS also stated that it supports the intent of the recommendation to improve cybersecurity, including efforts to develop performance metrics. Further, in regard to the transportation sector specifically, DHS stated that the Transportation Security Administration and the United States Coast Guard would work in collaboration with the Department of Transportation to ensure that cybersecurity is at the forefront of their voluntary partnership. In written comments signed by the Department of the Treasury’s Acting Assistant Secretary for Financial Institutions (reprinted in app. IV), the department stated that monitoring the sector’s cybersecurity progress is a critical component of the sector’s efforts to reduce cybersecurity risk and discussed efforts with the department’s partners to improve the sector’s ability to assess progress and develop metrics. In written comments signed by EPA’s Deputy Assistant Administrator (reprinted in app. V), EPA generally agreed with our recommendation and discussed efforts to develop cybersecurity performance metrics for the water and wastewater systems sector. The Department of Transportation’s Director of Program Management and Improvement stated in an e-mail that the department concurred with our findings and our recommendation directed to the Secretary of Transportation and stated that it would continue to work with DHS to improve cyber risk mitigation activities and strengthen the transportation sector’s cybersecurity posture. If effectively implemented, the actions identified by these departments should help address the need to better measure cybersecurity progress in the sectors. The Departments of Agriculture and Health and Human Services did not comment on the recommendations made to them. 
In addition, officials from the Departments of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, and the Treasury and EPA also provided technical comments via e-mail that have been addressed in this report as appropriate. The Department of Transportation did not have technical comments for the report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Energy, Health and Human Services, Homeland Security, Transportation, and the Treasury; the Administrator of the Environmental Protection Agency; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to determine the extent to which sector-specific agencies (SSA) have (1) identified the significance of cyber risks to their respective sectors’ networks and industrial control systems, (2) taken actions to mitigate cyber risks within their respective sectors, (3) collaborated across sectors to improve cybersecurity, and (4) established performance metrics to monitor improvements in their respective sectors. To conduct our evaluation, we analyzed relevant critical infrastructure protection policies and guidance for improving the cybersecurity posture of the nation’s critical infrastructure. Based on these analyses, we identified nine federal agencies designated as the sector-specific agencies for the critical infrastructure sectors. For this review, we focused on eight of the nine sector-specific agencies responsible for 15 of the 16 critical infrastructure sectors. 
We included the 15 sectors that involve private sector stakeholders in their efforts to implement activities to address sector security and resiliency goals. We excluded the General Services Administration, the sector-specific agency for the government facilities sector, as the sector is uniquely governmental with facilities either owned or leased by government entities. See Table 8 for the sectors and sector-specific agencies included in our review. To determine how sector-specific agencies prioritized cyber risks, we analyzed their efforts to identify and document cyber risks. We reviewed the risk assessment methodologies employed as documented in the 2010 sector-specific plans and other supplementary documentation such as formal risk assessments, strategy documents, and annual reports. We also interviewed officials responsible for carrying out the sector-specific agency roles and responsibilities to further understand their determination of the significance of cyber-related risks to their respective sectors. To identify SSAs’ activities to mitigate cyber risks, we compared sector- specific planning documents and actions to fulfill roles and responsibilities as identified in federal policy and the 2013 National Infrastructure Protection Plan (NIPP) Call to Action steps related to cyber risks. The NIPP steps are suggested practices to guide sector-specific agencies’ actions. The NIPP presented a total of 12 steps; however, we excluded 2 steps that we determined did not have a cybersecurity-related nexus. We analyzed the latest sector-specific plans, which were released in 2010, and other sector-specific planning documents including risk assessments and strategies for each of the sectors. We also interviewed officials from the SSAs and obtained related documentation to identify cyber risk mitigation activities. 
Additionally, we interviewed private sector stakeholders representing the sector coordinating councils to corroborate the sector-specific agencies' cyber risk mitigation activities. We used all of this information to determine the extent to which each of the sector-specific agencies conducted activities for 9 of the NIPP Call to Action steps. To determine the extent of the sector-specific agencies' collaborative efforts to enhance their sectors' cybersecurity environment, we reviewed documentation related to the collaboration mechanisms utilized by the sector-specific agencies. We also identified the collaborative groups, councils, and working groups that were utilized most frequently by SSAs to share cybersecurity-related information across the sectors. We analyzed documentation of cross-sector collaboration from the sector, government, and cross-sector coordinating councils. Additionally, we interviewed SSA officials and private sector stakeholders representing the sector coordinating councils. To identify performance measures used by SSAs to monitor cybersecurity in their respective sectors, we analyzed the sector-specific plans and cybersecurity-related performance reporting documents and interviewed SSA officials. We reviewed performance evaluation guidance related to national security and resiliency goals provided to the SSAs for past and future planning efforts. Additionally, we reviewed past sector annual reports, which tracked actions of the sector against goals established in the 2010 sector-specific plans, as well as strategic documents or roadmaps used to track sector performance. We reviewed reports of cyber incidents and data breaches provided as examples of indicators for SSAs to monitor sector cybersecurity. We also interviewed private sector partners to identify sources of cybersecurity-related data being reported to the sector-specific agencies.
We conducted this performance audit from June 2014 to November 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix provides further details on the cyber risk mitigation activities that sector-specific agencies (SSA) developed for the 15 sectors in our review, based on analysis of documentation and statements from SSA officials. Tables 9 through 23 below show, for each sector, SSA actions that aligned with the 2013 National Infrastructure Protection Plan (NIPP) Call to Action steps. In addition to the contact named above, Michael W. Gilmore, Assistant Director; Kenneth A. Johnson; Lee McCracken; David Plocher; Di’Mond Spencer; Jonathan Wall; and Jeffrey Woodward made key contributions to this report. | U.S. critical infrastructures, such as financial institutions, commercial buildings, and energy production and transmission facilities, are systems and assets, whether physical or virtual, vital to the nation's security, economy, and public health and safety. To secure these systems and assets, federal policy and the NIPP establish responsibilities for federal agencies designated as SSAs, including leading, facilitating, or supporting the security and resilience programs and associated activities of their designated critical infrastructure sectors.
GAO's objectives were to determine the extent to which SSAs have (1) identified the significance of cyber risks to their respective sectors' networks and industrial control systems, (2) taken actions to mitigate cyber risks within their respective sectors, (3) collaborated across sectors to improve cybersecurity, and (4) established performance metrics to monitor improvements in their respective sectors. To conduct the review, GAO analyzed policy, plans, and other documentation and interviewed public and private sector officials for 8 of 9 SSAs with responsibility for 15 of 16 sectors. Sector-specific agencies (SSA) determined the significance of cyber risk to networks and industrial control systems for all 15 of the sectors in the scope of GAO's review. Specifically, they determined that cyber risk was significant for 11 of 15 sectors. Although the SSAs for the remaining four sectors had not determined cyber risks to be significant during their 2010 sector-specific planning process, they subsequently reconsidered the significance of cyber risks to the sector. For example, commercial facilities sector-specific agency officials stated that they recognized cyber risk as a high-priority concern for the sector as part of the updated sector planning process. SSAs and their sector partners are to include an overview of current and emerging cyber risks in their updated sector-specific plans for 2015. SSAs generally took actions to mitigate cyber risks and vulnerabilities for their respective sectors. SSAs developed, implemented, or supported efforts to enhance cybersecurity and mitigate cyber risk with activities that aligned with a majority of actions called for by the National Infrastructure Protection Plan (NIPP). SSAs for 12 of the 15 sectors had not identified incentives to promote cybersecurity in their sectors as proposed in the NIPP; however, the SSAs are participating in a working group to identify appropriate incentives.
In addition, SSAs for 3 of 15 sectors had not yet made significant progress in advancing cyber-based research and development within their sectors because it had not been an area of focus for their sector. Department of Homeland Security guidance for updating the sector-specific plans directs the SSAs to incorporate the NIPP's actions to guide their cyber risk mitigation activities, including cybersecurity-related actions to identify incentives and promote research and development. All SSAs that GAO reviewed used multiple public-private and cross-sector collaboration mechanisms to facilitate the sharing of cybersecurity-related information. For example, the SSAs used councils of federal and nonfederal stakeholders, including coordinating councils and cybersecurity and industrial control system working groups, to coordinate with each other. In addition, SSAs participated in the National Cybersecurity and Communications Integration Center, a national center at the Department of Homeland Security, to receive and disseminate cyber-related information for public and private sector partners. The Departments of Defense, Energy, and Health and Human Services established performance metrics for their three sectors. However, the SSAs for the other 12 sectors had not developed metrics to measure and report on the effectiveness of all of their cyber risk mitigation activities or their sectors' cybersecurity posture. This was because, among other reasons, the SSAs rely on their private sector partners to voluntarily share information needed to measure efforts. The NIPP directs SSAs and their sector partners to identify high-level outcomes to facilitate progress towards national goals and priorities. 
Until SSAs develop performance metrics and collect data to report on the progress of their efforts to enhance the sectors' cybersecurity posture, they may be unable to adequately monitor the effectiveness of their cyber risk mitigation activities and document the resulting sector-wide cybersecurity progress. GAO recommends that certain SSAs collaborate with sector partners to develop performance metrics and determine how to overcome challenges to reporting the results of their cyber risk mitigation activities. Four of these agencies concurred with GAO's recommendation, while two agencies did not comment on it. |
In the last several decades, the Congress has passed legislation to increase federal agencies’ ability to determine the health and environmental risks associated with toxic chemicals and to address such risks. Some of these laws, such as the Clean Air Act; the Clean Water Act; the Federal Food, Drug, and Cosmetic Act; and the Federal Insecticide, Fungicide, and Rodenticide Act, authorize the control of hazardous chemicals in, among other things, the air, water, soil, food, drugs, and pesticides. Other laws, such as the Occupational Safety and Health Act and the Consumer Product Safety Act, can be used to protect workers and consumers from unsafe exposures to chemicals in the workplace and the home. These laws were generally enacted in or before the early 1970s. Nonetheless, the Congress found that human beings and the environment were being exposed to a large number of chemicals and that some could pose an unreasonable risk of injury to health or the environment. In 1976, the Congress passed TSCA to provide EPA with the authority to obtain more information on chemicals and regulate those chemicals that pose an unreasonable risk to human health or the environment. TSCA provides EPA with the authority, upon making certain determinations, to collect information about the hazards posed by chemical substances and to take action to control unreasonable risks by either preventing dangerous chemicals from making their way into commerce or otherwise regulating them, such as by placing restrictions on those already in the marketplace. While other environmental and occupational health laws generally only control the release of chemicals in the environment, exposures in the workplace, or the disposal of chemicals, TSCA allows EPA to control the entire life cycle of chemicals from their production and distribution to their use and disposal.
However, the act does not apply to certain substances such as nuclear material, firearms and ammunition, pesticides, food, food additives, tobacco, drugs, and cosmetics. TSCA’s role in ensuring that chemicals in commerce do not present an unreasonable risk of injury to health or the environment is established in six major sections of the act, as shown in table 1. Under section 4, EPA can promulgate rules to require chemical companies to test potentially harmful chemicals for their health and environmental effects. To require testing, EPA must find that a chemical (1) may present an unreasonable risk of injury to human health or the environment or (2) is or will be produced in substantial quantities and that either (a) there is or may be significant or substantial human exposure to the chemical or (b) the chemical enters or may reasonably be anticipated to enter the environment in substantial quantities. (For the remainder of this report, we will refer to parts (a) and (b) of this second finding in abbreviated form as a finding “that there is or may be substantial human or environmental exposure to the chemical”). EPA must also determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment and that testing is necessary to develop such data. Section 5 requires chemical companies to notify EPA at least 90 days before beginning to manufacture a new chemical or before manufacturing or processing a chemical for a use that EPA has determined by rule is a significant new use. EPA has these 90 days to review the chemical information in the premanufacture notice and identify the chemical’s potential risks. 
Under section 5(e), if EPA determines that there is insufficient information available to permit a reasoned evaluation of the health and environmental effects of a chemical and that (1), in absence of such information, the chemical may present an unreasonable risk of injury to health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance, then EPA can issue a proposed order or seek a court injunction to prohibit or limit the manufacture, processing, distribution in commerce, use, or disposal of the chemical. Under section 5(f), if EPA finds that the chemical will present an unreasonable risk, EPA must act to protect against the risk. If EPA finds that there is a reasonable basis to conclude that a new chemical may pose an unreasonable risk before it can protect against such risk by regulating it under section 6 of TSCA, EPA can (1) issue a proposed rule, effective immediately, to require the chemical to be marked with adequate warnings or instructions, to restrict its use, or to ban or limit the production of the chemical or (2) seek a court injunction or issue a proposed order to prohibit the manufacture, processing, or distribution of the chemical. Section 6 requires EPA to apply regulatory requirements to chemicals for which EPA finds a reasonable basis exists to conclude that the chemical presents or will present an unreasonable risk to human health or the environment. To adequately protect against a chemical’s risk, EPA can promulgate a rule that bans or restricts the chemical’s production, processing, distribution in commerce, disposal or use, or requires warning labels be placed on the chemical. Under TSCA, EPA must choose the least burdensome requirement that will adequately protect against the risk. 
In promulgating a rule, EPA must consider and publish a statement regarding: the effects of the chemical on health and the environment and the magnitude of human and environmental exposure; the benefits of the chemical for various uses and the availability of substitutes for those uses; and the reasonably ascertainable consequences of the rule, after consideration of the effect on the national economy, small businesses, technological innovation, the environment, and public health. If another law would sufficiently eliminate or reduce the risk of injury to health or the environment, then EPA may not promulgate a TSCA rule unless it finds that it is in the public interest to do so, considering all relevant aspects of the risk, a comparison of the estimated costs of compliance under TSCA and the other law, and the relative efficiency of actions under TSCA and the other law to protect against the risk of injury. Section 8 requires EPA to promulgate rules under which chemical companies must maintain records and submit such information as the EPA Administrator reasonably requires. This information can include, among other things, chemical identity, categories of use, production levels, by-products, existing data on adverse health and environmental effects, and the number of workers exposed to the chemical. In addition, section 8 provides EPA with the authority to promulgate rules under which chemical companies are required to submit lists or copies of any health and safety studies to EPA. Finally, section 8 requires chemical companies to report any information to EPA that reasonably supports a conclusion that a chemical presents a substantial risk of injury to health or the environment. Section 9 establishes TSCA’s relationship to other laws.
The section includes a mechanism for EPA to alert other federal agencies of a possible need to take action if EPA has a reasonable basis to conclude that an unreasonable chemical risk may be prevented or sufficiently reduced by action under a federal law not administered by EPA. Section 9 also requires EPA to use authorities under other laws that it administers if its Administrator finds that a risk to health or the environment could be eliminated or sufficiently reduced under those laws, or unless EPA determines that it is in the public interest to protect against such risks under TSCA. Section 14 details when EPA may disclose chemical information obtained by the agency under TSCA. Chemical companies can claim certain information, such as data disclosing chemical processes, as confidential business information. EPA generally must protect confidential business information against public disclosure unless necessary to protect against an unreasonable risk of injury to health or the environment. Other federal agencies and federal contractors can obtain access to this confidential business information in order to carry out their responsibilities. EPA may also disclose certain data from health and safety studies. While TSCA authorizes EPA to promulgate rules requiring chemical companies to conduct tests on chemicals and submit the resulting data to EPA, TSCA does not require chemical companies to test new chemicals for their toxicity and exposures before they are submitted for EPA’s review and, according to EPA officials, chemical companies typically do not voluntarily perform such testing. In the absence of chemical test data, EPA largely relies on scientific models to screen new chemicals. However, use of the models can present weaknesses in an assessment because models do not always accurately determine the chemicals’ properties and the full extent of their adverse effects, especially with regard to their general health effects. 
Nevertheless, EPA believes that the models are useful as basic screening tools where actual test data on health and environmental effects are not available from chemical companies. EPA believes that the models are an effective tool that, in conjunction with other factors, such as premanufacture notice information on the anticipated production levels and uses of a chemical, supplies a reasonable basis for either dropping the chemical from further review or subjecting it to more detailed review and possible controls. EPA routinely updates database sources for models with new data received through premanufacture notice submissions, required testing from consent orders, substantial risk submissions, and voluntary testing. EPA acknowledges, however, that future efforts to obtain additional test data could enhance the models’ usefulness by providing a more robust database for their further development and validation for regulatory purposes. Furthermore, the information in premanufacture notices that EPA uses to assess potential exposures to new chemicals, such as production volume and anticipated uses, consists of estimates that can change substantially once EPA completes its review and manufacturing begins. Although TSCA authorizes EPA to require a manufacturer to submit a new notice under certain conditions, the agency must first, after consideration of relevant statutory factors, promulgate a significant new use rule in which it identifies significant new uses or activities for which a new notice is required. EPA estimates that most premanufacture notices do not include test data of any type, and only about 15 percent include health or safety test data. Chemical companies do not have an incentive to conduct these tests because they may take over a year to complete, and some tests may cost hundreds of thousands of dollars.
During a review of a new chemical, EPA evaluates risks by conducting a chemical analysis, searching the scientific literature, reviewing agency files (including files of related chemicals that have already been assessed by EPA), analyzing toxicity data on structurally similar chemicals, calculating potential releases of and exposures to the chemical, and identifying the chemical’s potential uses. On the basis of this review, EPA makes a decision to (1) take no action; (2) under section 5(e) of TSCA, require controls on the use, manufacture, processing, distribution in commerce, or disposal of the chemical pending development of test data; or (3) ban or otherwise regulate the chemical pending the receipt and evaluation of test studies performed by the chemical’s manufacturer. Because EPA generally does not have sufficient data on a chemical’s properties and effects when reviewing a new chemical, EPA uses a method known as structure-activity relationship (SAR) analysis to screen and evaluate a chemical’s toxicity. This method, also referred to as the nearest analogue approach, involves using models to compare new chemicals with chemicals with similar molecular structures for which test data on health and environmental effects are available. EPA applies models where actual test data in general, and health and environmental effects test data in particular, are not available. EPA officials said that the models make conservative predictions that the agency believes result in erring on the side of protecting human health and the environment in screening chemicals. EPA’s own attempts to determine the strength of these models show them to be highly accurate in predicting some chemical characteristics, but less accurate for other characteristics.
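The "nearest analogue" idea behind SAR screening can be illustrated with a minimal sketch. Everything here is invented for illustration: the chemicals, the crude structural fingerprints, and the concern levels are hypothetical, and EPA's actual models are far more sophisticated than a nearest-neighbor lookup.

```python
# Toy sketch of nearest-analogue (SAR-style) screening: a new chemical with
# no test data inherits the concern level of the most structurally similar
# chemical for which test data exist. All names, fingerprints, and concern
# levels below are hypothetical.
from math import sqrt

# Hypothetical database: crude structural "fingerprints" (counts of a few
# functional groups) paired with a concern level derived from test data.
TESTED_ANALOGUES = {
    "chemical_A": ((2, 0, 1), "low"),
    "chemical_B": ((0, 3, 0), "high"),
    "chemical_C": ((1, 1, 2), "moderate"),
}

def distance(a, b):
    """Euclidean distance between two structural fingerprints."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def screen_new_chemical(fingerprint):
    """Return the name and concern level of the nearest tested analogue."""
    name, (_, concern) = min(
        TESTED_ANALOGUES.items(),
        key=lambda item: distance(fingerprint, item[1][0]),
    )
    return name, concern

# A new chemical structurally closest to chemical_B inherits its "high"
# concern level and would be routed to detailed review rather than
# screened out.
analogue, concern = screen_new_chemical((0, 2, 1))
```

In this sketch the conservative bias the report describes would come from how concern levels are assigned and tie-broken, not from the distance metric itself; the point is only that screening substitutes structural similarity for missing test data.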
For example, in 1993, EPA and the EU jointly conducted a study to compare EPA's predictions of individual physical and chemical properties or health or environmental effects with those identified by the EU based on test data submitted with EU notifications. The joint evaluation showed that the accuracy of EPA’s predictions varied, depending on the effect or the property being compared. For example, the study concluded that EPA methods are likely to identify those substances that are not readily biodegradable—in other words, slowly degrading chemicals. However, the study concluded that EPA methods do not appear to work as well in identifying chemicals that readily degrade as determined by the EU’s “ready biodegradation” base set test. The model performance is explained by recognizing that EPA’s model does not focus on ready biodegradation but rather on ultimate biodegradation. Since the 1993 study, EPA and others have conducted studies on selected aspects of some of its models, such as a 2001 study conducted by PPG Industries on the accuracy of aquatic toxicity predictions for different types of polymers. This study showed mixed results in that the models proved to be highly accurate for predicting the toxicity of the chemicals tested on rainbow trout, but were in error for about 25 percent of the cases in which the models’ results were compared with actual test data for determining the chemicals’ effects on the growth of aquatic algae, an important environmental end point. EPA officials told us that, while the overall accuracy of the models has not been validated for regulatory purposes, they are effective as screening tools that allow EPA to focus its attention on the chemicals of greatest concern—chemicals about which little is known other than that they are structurally related to known harmful chemicals.
By applying approaches that make conservative predictions, EPA believes that it is more likely to identify a false positive (where a chemical is determined to be of concern, but on further analysis is found to be of low concern) than a false negative (where a chemical is initially viewed as a low concern though on further analysis is actually of higher concern). According to EPA, only about 20 percent of the premanufacture notices received annually go through the agency’s more detailed full-review process after they have been initially screened. That is, according to EPA officials, the majority of new chemicals submitted for review can be screened out as not requiring further review because (1) EPA determines on the basis of its screening models that a chemical has potential for low toxicity to human health or the environment or (2) on the basis of other information, such as the anticipated uses, exposures, and releases of the chemicals, only limited potential risks to people and the environment are expected. In addition, using these models, EPA identifies for possible regulatory action those chemicals belonging to certain chemical categories that, based on its prior experience in reviewing new chemicals, are likely to pose potential risks such that testing or controls are needed. EPA officials told us that while they take efforts to improve and validate their models for regulatory purposes where opportunities arise (e.g., models are subjected to peer review when significant modifications are introduced in their design or structure), they do not have a specific program to do so. EPA officials stated that they routinely use test data to improve the models as it becomes available, but TSCA does not require companies to routinely conduct tests and submit such data to the agency.
Unless EPA requires testing under section 4 of TSCA, TSCA only requires chemical companies to provide notice to EPA of information the companies obtain that reasonably supports the conclusion that the chemical presents a substantial risk of injury to health or the environment. Under section 4 of TSCA, EPA may promulgate a rule requiring companies to conduct tests and submit test data but may do so only if it first determines that current data are insufficient; that testing is necessary; and that either (1) the chemical may present an unreasonable risk or (2) the chemical is or will be produced in substantial quantities and there is or may be substantial human or environmental exposure to the chemical. EPA officials said that chemical companies may have test data that show that a chemical has low toxicity. These officials also said that such data would be useful for helping to improve the accuracy of their models. EPA has authority under section 8 of TSCA to promulgate rules requiring companies to submit any existing test data concerning the environmental and health effects of a chemical or copies of any health and safety studies conducted or initiated by, or otherwise known by, the chemical company. EPA officials told us that other efforts are under way to validate these models for regulatory purposes. Organization for Economic Co-operation and Development (OECD) member countries are undertaking collaborative efforts to develop and harmonize SAR methods for assessing chemical hazards. However, EPA is hampered in its ability to provide supporting test data to aid OECD as part of this effort because confidentiality provisions in TSCA do not allow EPA to share confidential business information submitted by chemical companies with foreign governments.
EPA officials said that international efforts to validate SAR models for regulatory purposes and to move toward harmonized international chemical assessments would be improved if EPA had the ability to share this information under appropriate procedures to protect confidentiality. TSCA’s provisions are in contrast to those of the Canadian Environmental Protection Act (CEPA), for example, which authorizes the Canadian Minister of the Environment to share confidential business information with other governments under agreements or arrangements where the government undertakes to keep the information confidential. Chemical industry representatives told us that the industry also sees benefits in allowing countries to share information in order to harmonize chemical assessments among developed countries and improve chemical risk assessment methods by allowing countries to cooperate in improving models used to predict chemical toxicity. The chemical industry is concerned, however, that the confidential information shared be protected from inappropriate disclosure. These chemical industry representatives told us that some countries currently do not have stringent enough procedures for protecting confidential business information. However, they suggested that the policies and procedures EPA currently uses to protect confidential information are appropriate. Accordingly, they said that the chemical industry would not object to TSCA revisions allowing EPA to share confidential information with foreign countries and organizations, provided that such revisions contain specific reference to safeguards that EPA would establish and enforce to ensure that those receiving the information have stringent policies and procedures to protect it. 
In this regard, chemical industry representatives stated that such policies and procedures should include provisions such as requiring that those who handle confidential information be briefed on the importance of not disclosing the information to those without the proper clearance and keeping such information in locked storage. EPA officials told us that, in addition to assisting international efforts to enhance modeling tools and harmonize international chemical assessments, the ability to share confidential business information with foreign governments would be beneficial for developing a strategy to identify the resources needed to develop and validate new models for regulatory purposes—a measure that is especially important given the continuing central role of scientific models in EPA’s assessment program for new chemicals. These officials also suggested that it would be productive to explore regulatory and voluntary approaches that could be used to obtain additional information from chemical companies on chemical properties and characteristics, including “negative” studies—i.e., evidence that a chemical is not harmful. According to EPA, such information is useful for understanding the chemical and thus for developing and validating models for regulatory purposes. Under TSCA, companies submitting a premanufacture notice must, at the same time, submit data such as anticipated production volume, manufacturing process, and any test data in their possession and a description of any other reasonably ascertainable data concerning the environmental and health effects of the chemical. If EPA feels it needs more information on these chemicals, it could explore promulgating a test rule under section 4 or issuing a proposed order pending the development of information under section 5(e). 
In addition, as noted above, EPA has authority under section 8 of TSCA to promulgate rules requiring companies to submit any existing test data concerning the environmental and health effects of a chemical or copies of any health and safety studies conducted or initiated by, or otherwise known by, the chemical company. Chemical industry representatives with whom we spoke told us that they see much merit in working toward a strategy that would give EPA data that could help the agency improve its models. They believe that it is to everyone’s benefit to have approaches that produce models that are useful for identifying both safe and problematic chemicals. This is especially true for enabling industry to make timely decisions, particularly for chemicals having short life spans and requiring quick production decisions essential to innovation. These chemical industry representatives also said that a comprehensive strategy for improving models would be particularly beneficial to developing countries lacking extensive experience in manufacturing chemicals because it would enable them to speed their progress toward developing chemicals that are safe and effective. Chemical companies are generally required to submit to EPA, 90 days before beginning to manufacture a new chemical, a premanufacture notice containing information including the chemical’s identity, its production process, categories of uses, estimated production volumes, potential exposure levels and releases, any test data in the possession or control of the chemical company, and a description of any other data concerning the environmental or health effects known to or reasonably ascertainable by the chemical company. EPA bases its exposure estimates for new chemicals on information contained in premanufacture notices. However, the anticipated production volume, uses, exposure levels, and release estimates outlined in the premanufacture notice do not have to be amended once manufacturing begins.
That is, once EPA completes its review and production begins, absent any requirement imposed by EPA such as a significant new use rule, chemical companies are not required under TSCA to limit the production of a chemical or its uses to those specified in the premanufacture notice or to submit another premanufacture notice if changes occur. However, the potential risk of injury to human health or the environment may increase when chemical companies increase production levels or expand the uses of a chemical. To address this potential, TSCA authorizes EPA to promulgate a rule specifying that a particular use of a chemical would be a “significant new use.” The manufacturers, importers, and processors of the chemical for that use would then be required to notify EPA at least 90 days before beginning manufacturing or processing the chemical for that use. When EPA’s assessment of new chemicals identifies health and safety problems, EPA can issue a proposed rule to prevent chemical companies from manufacturing or distributing the chemical in commerce, or to otherwise restrict the chemical’s production or use, if the agency believes the new chemical may present an unreasonable risk before EPA can regulate the chemical under section 6 of TSCA. Despite limitations in the information available on new chemicals, EPA’s reviews have resulted in some action being taken to reduce the risks of over 3,500 of the 32,000 new chemicals that chemical companies have submitted for review. These actions ranged from chemical companies voluntarily withdrawing their notices of intent to manufacture new chemicals, to chemical companies entering into consent orders with EPA to produce a chemical under specified conditions, to EPA promulgating significant new use rules requiring chemical companies to notify EPA of their intent to manufacture or process certain chemicals for new uses prior to manufacturing or processing the chemicals for such uses.
For over 1,600 chemicals, companies withdrew their premanufacture notices, sometimes after EPA officials indicated that the agency planned to initiate the process for placing controls on the chemical, such as requiring testing or prohibiting the production or certain uses of the chemical. EPA officials told us that after EPA screened the chemical or performed a more detailed analysis of it, chemical companies often drop their plans to market a new chemical when the chemical’s niche in the marketplace is uncertain and EPA requests that the company develop and submit test data. According to an EPA official, companies may be uncertain that they will recoup the costs of testing and prefer instead to withdraw their premanufacture notice. For over 1,200 chemicals, EPA has issued orders requiring chemical companies to implement workplace controls or practices during manufacturing pending the development of information, and/or perform toxicity testing when the chemical’s production volumes reached certain levels. EPA may issue these proposed orders to control the production, distribution, use, or disposal of a new chemical when there is insufficient information available to EPA to reasonably evaluate the human health or environmental effects of a chemical and when the chemical (1) may present an unreasonable risk to human health or the environment or (2) it is or will be produced in substantial quantities and (a) it either enters or may reasonably be anticipated to enter the environment in substantial quantities or (b) there is or may be significant or substantial human exposure to the substance. Under section 5 of TSCA, EPA cannot require that chemical companies develop this information, but TSCA authorizes EPA to control the manufacturing and processing of the chemical until EPA has sufficient data to determine if the chemical will pose a risk. 
For about 570 of the 32,000 new chemicals submitted for review, EPA required chemical companies to submit notices for any significant new uses of the chemical, providing EPA the opportunity to review the risks of injury to human health or the environment before the new uses begin. For example, in 2003, EPA promulgated a significant new use rule requiring chemical companies to submit a notice for the manufacture or processing of substituted benzenesulfonic acid salt for any use other than as described in the premanufacture notice. Finally, in 1984, EPA issued proposed rules that were effective upon publication to impose certain controls on four new chemicals the agency determined would pose an unreasonable risk to human health or the environment. The rules—which remain in effect today—prohibit adding any nitrosating agent, including nitrites, to metal working fluids that contain these substances. According to EPA, adding nitrites or other nitrosating agents to the substances causes the formation of a substance known to cause cancer in laboratory animals. See appendix V for more information on the rules issued to control these four chemicals. TSCA authorizes but does not specifically require EPA to review the risks of existing chemicals. Further, EPA cannot require chemical companies to test the safety of existing chemicals and provide the resulting test data to the agency unless EPA first determines, on the basis of risk or production and exposure information, that the chemicals warrant such testing. EPA has used its authority to require testing for fewer than 200 of the 62,000 chemicals that were in commerce when EPA began reviewing chemicals under TSCA in 1979.
Furthermore, according to EPA, in part because it is costly and labor-intensive for EPA to require the development of toxicity and exposure data, the agency has performed internal reviews of only an estimated 2 percent of the chemicals that were in the TSCA inventory when EPA began chemical reviews in 1979. Additionally, EPA has rarely banned existing chemicals or limited their production or use. Only five chemical substances or groups of chemical substances have been regulated under section 6, and the last final action EPA took to control existing chemicals under section 6 was published in 1990. Since 1998, EPA has focused its efforts on obtaining information on existing chemicals through voluntary programs, such as the HPV Challenge Program. This program will provide basic data on the characteristics of about 2,800 chemicals produced in excess of 1 million pounds a year. However, while EPA has received recommendations from the NPPTAC on a process for screening these chemicals, the agency has not yet implemented guidelines for reviewing the data so that the chemicals can be prioritized and more detailed information can be obtained to further assess their risks to human health and the environment. Canada and the EU have recently taken action—passing legislation and proposing a new regulation, respectively—to further regulate or assess existing chemicals. When implemented, these actions may require U.S. chemical companies to submit information on some chemicals manufactured or processed in or exported to Canada and the EU. EPA has authority under section 8 of TSCA to require that copies of such data for chemicals manufactured or processed by chemical companies in the United States be made available to EPA. According to EPA officials, EPA's toxicity and exposure data on existing chemicals are often incomplete, and TSCA's authority to require testing is difficult to use in support of the agency's review process.
While TSCA authorizes the review of existing chemicals, it generally provides no specific requirement, time frame, or methodology for doing so. Instead, EPA conducts initial reviews after it receives information from the public or chemical companies that a chemical may pose a risk. For example, if a chemical company voluntarily tests a chemical or otherwise obtains information about a chemical that reasonably supports the conclusion that the chemical presents a substantial risk to human health or the environment, TSCA requires that the chemical company immediately notify EPA of this information. EPA then reviews the information to determine the need for additional testing or risk management. However, chemical companies are not required to develop and submit toxicity information to EPA unless EPA promulgates a testing rule, thus placing on the agency the burden of obtaining data or requiring industry to develop them. In addition, if chemical company testing shows that a chemical is not toxic, there is generally no standing requirement that the chemical companies submit these data to EPA. Consequently, when EPA decides to review existing chemicals, it generally has only limited information on the risks of injury the chemicals pose to human health and the environment. Facing such difficulties in obtaining information, EPA has made little progress in reviewing existing chemicals since it began reviewing chemicals under TSCA in 1979. The limited amount of information available to EPA on existing chemicals' toxicity was illustrated in a 1998 EPA report on publicly available data for 2,863 high-production-volume chemicals produced and/or imported at over 1 million pounds per year in 1990. For each of these chemicals, EPA examined the readily available data corresponding to six basic end points that have been internationally agreed to as necessary for a screening-level assessment of a chemical's toxicity and environmental fate.
EPA estimated that only about 7 percent of the 2,863 chemicals had information on all six basic end points, 50 percent had information for one to five of the end points, and 43 percent had no information for any of the end points. According to EPA officials, the agency has access to even less information for chemicals not considered high-production-volume chemicals. Furthermore, EPA has limited information on how existing chemicals are used and how they come into contact with people or the environment. To gather more exposure information, in 2003, EPA amended its TSCA Inventory Update Rule (IUR), which is primarily used to gather certain information on chemicals produced at more than a basic threshold volume in the year reported. Among other things, EPA raised the basic production volume reporting threshold from 10,000 to 25,000 pounds, required chemical companies producing or importing chemicals at a site at or above this threshold to report the number of workers reasonably likely to be exposed to the chemical at each site, and added a reporting threshold of 300,000 pounds per site at or above which chemical companies must report readily obtainable exposure-related use and processing information. Nevertheless, TSCA does provide EPA with the authority to obtain information needed to assess chemicals by issuing rules under section 4 of TSCA requiring chemical companies to test chemicals and submit the test data to EPA. However, because promulgating test rules to obtain test data on chemicals can be time consuming, EPA has negotiated agreements with chemical companies to conduct testing. In 1979, EPA instituted a process to negotiate with chemical companies and reach voluntary agreements to test the safety of certain chemicals. 
However, in 1984, the United States District Court for the Southern District of New York found that EPA had failed to discharge its obligations under TSCA by negotiating such voluntary agreements instead of initiating rulemaking with respect to chemicals designated for testing by the Interagency Testing Committee (ITC) under section 4(e) of TSCA. The court determined that EPA had made de facto findings that testing of the ITC-designated chemicals was necessary. The court noted that the very negotiation and acceptance of voluntary testing agreements demonstrated EPA’s belief that additional data on the particular chemicals at issue needed to be developed. Upon making such findings, the court stated that it is EPA’s duty under TSCA to make the mandatory choice between initiating rulemaking proceedings or publishing its reasons for not doing so and that EPA had not done this. The court found no support either in TSCA or “on some vague assertion of agency discretion” for EPA’s use of the negotiated testing agreements instead of rulemaking proceedings. The court also found that, in addition to violating the test rule promulgation process set forth in TSCA, EPA’s failure to use the rulemaking process bypassed several other important provisions within the statutory framework of TSCA. The court stated that it was not EPA’s prerogative to “substitute for this intricate framework a number of haphazard and informal purported equivalents” and that negotiated testing programs without rulemaking cannot be sanctioned under TSCA. In order to address the concerns raised by the court, EPA promulgated a rule in 1986, revising its procedures and providing for its current use of enforceable consent agreements, which EPA believes bind the companies signing them to perform the testing they agree to perform. EPA regulations state that when EPA believes testing is necessary, it will explore whether a consent agreement can be negotiated that satisfies those testing needs. 
The regulations further require EPA to publish a notice in the Federal Register when it decides to initiate negotiations. EPA will meet with manufacturers, processors, and other interested parties (those responding to EPA’s Federal Register notice) to attempt negotiation of a consent agreement. All negotiating meetings are open to the public, and EPA is to prepare meeting minutes and make them—as well as testing proposals, correspondence, and other relevant material—available to the public. When EPA prepares a draft consent agreement, it is circulated for comment to all interested parties, who have 4 weeks to submit comments or written objections. Where consensus exists on the draft consent agreement, as determined under the criteria listed in EPA’s regulations, the draft will be circulated to EPA management and interested parties for final approval and signature. EPA will then publish another Federal Register notice summarizing the consent agreement and listing the name of the chemical to be tested in its regulations. According to EPA, these agreements allow greater flexibility in the design of the testing program because test methods can be negotiated. The relationship between EPA and the chemical industry is typically nonadversarial, and it usually takes less than a year for testing to begin on chemicals subject to enforceable consent agreements. According to EPA, negotiating these agreements is generally less costly and time-consuming than promulgating test rules because EPA does not have to determine that (1) a chemical poses or may pose an unreasonable risk or (2) a significant or substantial potential may exist for human exposure to the chemical. However, chemical companies must be willing to participate in such negotiations. EPA has entered into consent agreements with chemical companies to develop tests for about 60 chemicals. EPA officials told us that, for an additional 250 chemicals, EPA issued formal decisions not to test. 
In a number of these cases, EPA had initiated the process to either require testing or negotiate consent agreements, but, prior to finalizing the rules or agreements, chemical companies or other organizations had met EPA's need for the data. While it appears that EPA's enforceable consent procedures have been a good mechanism for acquiring needed test data, as the United States District Court for the Southern District of New York noted, "[i]t is not an agency's prerogative to alter a statutory scheme even if its assertion is as good or better than the congressional one." In this regard, it is not clear whether EPA's current use of enforceable consent agreements would fare better than its previous use of voluntary agreements if challenged in court. EPA's regulations require enforceable consent agreements to address many of the provisions of TSCA triggered by test rules that the court found were lacking in EPA's earlier voluntary agreements. However, some important differences remain between the TSCA framework for testing rules and EPA's regulations for enforceable consent agreements. First, the enforceable consent agreement regulations do not account for some of the TSCA provisions that would be triggered by a test rule. For example, the regulations do not require the submission of test data along with the premanufacture notices for new chemicals. The regulations also neither preempt state or local testing rules, as a TSCA test rule would, nor do they have the same reporting and recordkeeping requirements. Second, unlike a testing rule, which would trigger TSCA requirements for all manufacturers and processors of a particular chemical, a consent agreement would generally only trigger such requirements for those manufacturers and processors that sign the agreement.
While EPA regulations state that any person exporting or intending to export a chemical that is the subject of an enforceable consent agreement must notify EPA, it is unclear how EPA would enforce this provision if the person had not signed the agreement. Despite EPA’s attempts to incorporate a number of the test rule-triggered TSCA provisions into its enforceable consent agreements, its efforts may still fall short. Like EPA’s earlier use of voluntary agreements, its use of enforceable consent agreements is not explicitly authorized under TSCA, and, if a court determined that EPA’s use of enforceable consent agreements equated to a de facto finding that testing was necessary, a court could again find that EPA lacked discretion to require testing other than through promulgation of a test rule. EPA officials believe that the agency’s revised procedures address the court’s findings, and that, while TSCA does not specifically authorize the use of consent agreements to obtain test data, a sound legal basis exists for invoking TSCA’s enforcement provisions against chemical companies that violate such agreements. Representatives of the American Chemistry Council (ACC) also told us that they have always considered the consent agreements to be enforceable and binding on the chemical companies signing them. Bolstering these views somewhat is the fact that EPA has been using the enforceable consent agreement process since establishing it by rule in 1986—nearly two decades ago. 
Nevertheless, an EPA legal memorandum states that although EPA could reasonably take the position that it is authorized to enter into enforceable consent agreements requiring testing—ultimately concluding that enforceable consent agreements could be enforced by EPA and would be upheld by the courts—"the matter is not free from doubt." EPA officials have stated that revising TSCA to explicitly provide authority to enter into enforceable consent agreements would be beneficial for clarifying when EPA has authority to enter into such agreements. Chemical industry representatives agreed with EPA that explicit authorization could be useful. Finally, according to EPA, the lack of information on existing chemicals and the relative difficulty of requiring testing under TSCA on the large scale that would be needed for the more than 2,000 chemicals produced at high volumes have led EPA, in cooperation with chemical companies, environmental groups, and other interested parties, to implement a voluntary program to obtain test data on high-production-volume chemicals from chemical companies. The HPV Challenge Program focuses on obtaining chemical company "sponsors" to voluntarily provide data on the approximately 2,800 chemicals that chemical companies reported in 1990 that they produced at a high volume—generally over 1 million pounds. Through this program, sponsors develop a minimum set of information on the chemicals, either by gathering available data, using models to predict the chemicals' properties, or conducting testing of the chemicals. EPA plans to use the data collected under the HPV Challenge Program to prioritize high-production-volume chemicals for further assessment. However, EPA has not yet adopted a methodology for prioritizing the chemicals or determining which require additional information. At EPA's request, in 2005 a federal advisory group proposed a methodology for prioritizing the HPV Challenge Program chemicals.
EPA anticipates implementing the recommendation and beginning screening in early 2006. While EPA will soon be collecting limited exposure information on chemicals produced at or above 25,000 pounds per year, the agency does not regularly collect exposure information on lower volume chemicals. EPA officials stated that, based on the success of the HPV Challenge Program, a future effort to develop an appropriate level of information for lower volume chemicals may hold promise, although given the demands of current efforts by EPA, industry, and others on HPV chemicals, no steps have been taken in this regard. Furthermore, EPA has no voluntary or test rule program in place for obtaining test data on chemicals that are currently produced in low volumes but may be produced at high volumes in the future. While chemical industry organizations have said that they will voluntarily provide a basic set of test data on certain high-production-volume chemicals that are not part of the HPV Challenge Program, it is unclear whether their efforts will produce information sufficient for EPA to make determinations of a chemical's risk to human health or the environment or will provide the information in a timely manner. EPA officials told us that, in cases where chemical companies do not voluntarily provide needed test data and health and safety studies in a complete and timely manner, requiring testing of existing chemicals of concern is the only practical way to ensure that the agency obtains needed information. For example, there are currently over 300 high-production-volume chemicals for which chemical companies have not agreed to provide the minimal test data that EPA believes are needed to initially assess their risks. Furthermore, many additional chemicals are likely to be added to this number in the future because the specific chemicals used in commerce are constantly changing, as are their production volumes.
Chemical industry representatives told us that TSCA (under section 8) provides EPA with adequate authority to issue rules requiring companies to provide EPA with any test and exposure data possessed by the companies, and that EPA could use such authority to obtain company information on existing chemicals of concern. EPA could then use that information to determine whether additional rules should be issued under section 4 of TSCA to require companies to perform additional testing of the chemicals. However, EPA officials told us that it is time-consuming, costly, and inefficient for the agency to use a two-step process of (1) issuing rules under section 8 of TSCA (which can take months or years to develop) to obtain exposure data or available test data that the chemical industry does not voluntarily provide to EPA and then (2) issuing additional rules under section 4 of TSCA requiring companies to perform specific tests necessary to ensure the safety of the chemicals tested. They also said that EPA's authority to issue rules requiring chemical companies to conduct tests on existing chemicals under section 4 of TSCA has been difficult to use because of the findings the agency must first make before it can require testing. Section 4 of TSCA requires EPA to find that current data are insufficient, that testing is necessary, and that either (1) the chemical may present an unreasonable risk or (2) the chemical is or will be produced in substantial quantities and there is or may be substantial human or environmental exposure to it. For example, if EPA wanted to issue a test rule on the basis of a chemical's production volume, it would still need to make the other requisite findings. In this regard, according to EPA officials, obtaining exposure information needed for rulemaking is particularly difficult.
To fully assess human exposure to a chemical, EPA needs to know how many workers, consumers, and others are exposed; whether the exposure occurs through inhalation or other means, such as skin absorption; and the amount and duration of the exposure. For environmental exposure, EPA needs to know such things as whether the chemical is being released into the air, water, or land; how much is being released; and the extent of the area affected. Another important factor in environmental exposure is chemical fate, that is, how the chemical acts and is ultimately disposed of in the environment. EPA must rely on estimates for most of this information because actual measurements of exposure in the environment, workplace, and home for the thousands of chemicals in use are not practicable, given the monitoring equipment and staff resources that would be required. Once EPA has made the required findings, the agency can issue a proposed rule for public comment, consider the comments it receives, and promulgate a final rule ordering chemical testing. EPA officials told us that finalizing rules under section 4 of TSCA can take from 2 to 10 years and require the expenditure of substantial resources. Given the time and resources required, the agency has issued rules requiring testing for only 185 of the approximately 82,000 chemicals in the TSCA inventory. Because EPA has used section 4 so sparingly, it has not continued to maintain information on the cost of implementing test rules. However, in our October 1994 report on TSCA, we noted that EPA officials told us that issuing a rule under section 4 can cost between about $68,500 and $234,000. Given the difficulties involved in requiring testing, EPA officials do not believe that TSCA's section 4 authorities provide an effective means for testing a large number of chemicals.
They believe that EPA could review substantially more chemicals in less time if they had authority to require chemical companies to conduct testing and provide test data on chemicals once they reach a substantial production volume, assuming EPA has also determined that testing is necessary in order to obtain these data. Even when EPA has toxicity and exposure information on existing chemicals, the agency stated that it has had difficulty demonstrating that harmful chemicals pose an unreasonable risk and that they should be banned or have limits placed on their production or use. Since the Congress enacted TSCA in 1976, EPA has issued regulations under the act to ban or limit the production or restrict the use of five existing chemicals or chemical classes. The five chemicals or chemical classes are polychlorinated biphenyls (PCB), fully halogenated chlorofluoroalkanes, dioxin, asbestos, and hexavalent chromium. (See app. V for additional information on these five chemicals). In addition, for 160 existing chemicals, EPA has required chemical companies to submit notices of any significant new uses of the chemical, providing EPA the opportunity to review the risks posed by the new use. In order to regulate an existing chemical under section 6(a) of TSCA, EPA must find that there is a reasonable basis to conclude that the chemical presents or will present an unreasonable risk of injury to health or the environment. 
Before regulating a chemical, the EPA Administrator must consider and publish a statement regarding the effects of the chemical on human health and the magnitude of human exposure to the chemical; the effects of the chemical on the environment and the magnitude of the environment’s exposure to the chemical; the benefits of the chemical for various uses and the availability of substitutes for those uses; and the reasonably ascertainable economic consequences of the rule, after consideration of the effect on the national economy, small business, technological innovation, the environment, and public health. Further, the regulation must apply the least burdensome requirement that will adequately protect against such risk. For example, if EPA finds that it can adequately manage the unreasonable risk of a chemical through requiring chemical companies to place warning labels on the chemical, EPA could not ban or otherwise restrict the use of that chemical. Additionally, if the EPA Administrator determines that a risk of injury to health or the environment could be eliminated or sufficiently reduced by actions under another federal law, then TSCA prohibits EPA from promulgating a rule under section 6(a) of TSCA, unless EPA finds that it is in the public interest considering all aspects of the risk, the estimated costs of compliance, and the relative efficiency of such action to protect against risk of injury. According to EPA, it has found it difficult to meet all of these requirements for rulemaking. Finally, EPA must also develop substantial evidence in the rulemaking record in order to withstand judicial review. 
Under TSCA, a reviewing court "shall hold unlawful and set aside" a TSCA rule "if the court finds that the rule is not supported by substantial evidence in the rulemaking record." According to EPA officials, the economic costs of regulating a chemical are usually more easily documented than the risks of the chemical or the benefits associated with controlling those risks, and it is difficult to show by substantial evidence that EPA is promulgating the least burdensome requirement. EPA's 1989 asbestos rule illustrates the evidentiary requirements that TSCA places on EPA to control existing chemicals. In 1979, EPA began exploring rulemaking under TSCA to reduce the risks posed by exposure to asbestos. Based upon its review of over 100 studies of the health risks of asbestos as well as public comments on the proposed rule, EPA concluded that asbestos was a potential carcinogen at all levels of exposure. In 1989, EPA promulgated a rule under TSCA section 6 prohibiting the future manufacture, importation, processing, and distribution of asbestos in almost all products. Some manufacturers of asbestos products filed suit against EPA, arguing, in part, that the rule was not promulgated on the basis of substantial evidence regarding unreasonable risk. In October 1991, the U.S. Court of Appeals for the Fifth Circuit agreed with the manufacturers, concluding that EPA had failed to muster substantial evidence to justify its asbestos ban and returning parts of the rule to EPA for reconsideration. In its ruling, the court concluded that EPA did not present sufficient evidence to justify the ban on asbestos because it did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome regulation required to adequately protect human health or the environment.
EPA had not calculated the risk levels for intermediate levels of regulation, as it believed there was no asbestos exposure level for which the risk of injury or death was zero. As articulated by the court, the proper course of action for EPA, after an initial showing of product danger, would have been to consider each regulatory option, beginning with the least burdensome, and the costs and benefits of each option. The court further criticized EPA's ban of products for which no substitutes were currently available, stating that, in such cases, EPA "bears a tough burden" to demonstrate, as TSCA requires, that a ban is the least burdensome alternative. Since the court's 1991 decision, EPA has exercised its authority to ban or limit the production or use of an existing chemical only once (for hexavalent chromium). However, EPA officials said that they had started the process for promulgating the rule for hexavalent chromium years prior to the asbestos decision. As the court noted, TSCA is not a zero-risk statute. EPA generally is required to choose the least burdensome regulatory action, and the Congress has indicated its intent that EPA carry out TSCA "in a reasonable and prudent manner" while considering "the environmental, economic, and social impact of any action." While concerns about the potential economic and social impacts of EPA's regulations are legitimate, according to EPA officials, the requirements that EPA demonstrate, before taking regulatory action, that its regulation uses the least burdensome approach to mitigate unreasonable risks and that its rulemaking is supported by substantial evidence have proven difficult to meet. Canada and the EU have recently taken action to prioritize and review existing chemicals.
The Canadian legislation (CEPA), enacted in 1999, requires the Minister of the Environment and the Minister of Health to compile, and from time to time amend, a Priority Substances List specifying those substances that the ministers believe should be given priority for assessing whether they are toxic or capable of becoming toxic. Within 7 years of the act's passage, the ministers are to categorize existing chemicals for the purpose of identifying substances that, in their opinion and on the basis of available information, (1) may present to individuals in Canada the greatest potential for exposure or (2) are persistent or bioaccumulative in accordance with the regulations, and inherently toxic to human beings or to nonhuman organisms, as determined by laboratory or other studies. The ministers shall then conduct screening assessments of such chemicals. The EU is currently considering a proposed regulation that, among other things, would require chemical companies to register and submit information on chemicals produced or imported in volumes of 1 metric ton or more per year, and would require submission of a chemical safety report documenting an assessment of chemicals manufactured or processed in quantities of 10 metric tons or more per year. Under CEPA and the proposed EU regulation, U.S. chemical companies may be required to provide information on some existing chemicals that are manufactured or processed in, or exported to, Canada and the EU. Under current EPA regulations, these U.S. chemical companies generally would not be required to submit the same information to EPA, although section 8 of TSCA provides the EPA Administrator authority to promulgate rules requiring chemical companies to submit such existing information on chemicals manufactured in or imported into the United States. While EPA officials told us that they are aware of the agency's authority to require the submission of at least some of the types of information that U.S.
chemical companies may be required to submit to Canada and the EU, they have not decided whether or when to use such authority. For example, these officials said that while the concept of obtaining copies of the information that U.S. chemical companies submit to foreign countries has merit, they might be able to obtain the information through voluntary arrangements with the foreign governments. Furthermore, EPA officials told us that any requirement for chemical companies to provide EPA a copy of the information they submit to Canada and the EU would have to meet the requirements under the Paperwork Reduction Act of 1995. Under this act, federal agencies must, among other things, conduct a review of the proposed information collection and obtain Office of Management and Budget approval before requesting most types of information from the public. EPA officials acknowledged that exchanging information through voluntary arrangements with foreign governments would have limitations, such as EPA’s inability to provide other countries with confidential business information. EPA officials also acknowledged that requiring copies of the submissions directly from the companies would produce a substantial amount of information that EPA could use to improve its models for assessing and predicting chemical risks. They told us that, given the recency of the Canadian chemical control changes and the pending nature of the EU regulation, EPA has not assessed all options or decided on a preferred approach for obtaining the data that U.S. chemical companies may be required to submit to foreign governments. EPA officials told us that the agency does not currently have a strategy or milestones for identifying resource needs and making decisions regarding future agency efforts to obtain such data. 
Chemical industry representatives told us that the industry would have no objections to EPA using its authority to require that chemical companies submit to EPA the same information that they provide to Canada, the EU, or other foreign governments. They indicated that few additional costs would be incurred by providing this information, but that companies could face additional burdens depending on the specific requirements governing the submission of data. For example, it would be easier for the chemical companies to provide the information periodically, such as annually, rather than concurrently with the submissions to foreign governments. EPA's ability to make publicly available the information that it collects under TSCA is limited. Chemical companies may claim some of the information they provide to EPA under TSCA as confidential business information. EPA is required under the act to protect trade secrets and privileged or confidential commercial or financial information against unauthorized disclosures, and this information generally cannot be shared with others, such as state health and environmental officials and foreign governments. However, some state officials believe this information would be useful for informing and managing their environmental risk programs. While EPA believes that some claims of confidential business information may be unwarranted, challenging the claims is resource-intensive. Lacking the resources needed to challenge claims on a wide basis, EPA identified several possible changes aimed at discouraging the submission of unwarranted claims of confidential business information under TSCA, but few were adopted. When companies submit information to EPA through premanufacture notices, many claim a large portion of the information as confidential. According to EPA, about 95 percent of premanufacture notices contain some information that chemical companies claim as confidential.
Under EPA regulations, information that is claimed as confidential shall generally be treated as such if no statute specifically requires disclosure. Exceptions include cases in which the information is required to be released by another federal law or a court order, the submitting company voluntarily withdraws its confidentiality claim, or the EPA Office of General Counsel makes a final administrative determination that the information does not meet the regulatory criteria substantiating a legal right to the claim. Officials who have various responsibilities for protecting public health and the environment from the dangers posed by chemicals believe that having access to confidential TSCA information would allow them to examine information on chemical properties and processes that they currently do not possess and could enable them to better control potential risks from harmful chemicals. For example, on the basis of a study performed by the state of Illinois with the cooperation of chemical companies and EPA, Illinois regulators found that toxicity information submitted under TSCA was useful in identifying chemical substances that should be included in contingency plans in order to alert emergency response and planning personnel to the presence of highly toxic substances at facilities. Additionally, the availability of this information could assist the states with environmental monitoring and enforcement. For instance, using TSCA data, Illinois regulators identified potential violations of state environmental regulations, such as cases where companies had submitted information to EPA under TSCA but failed to submit such information to the states as required. Likewise, the general public may also find information provided under TSCA useful. Individual citizens or community groups may have a specific interest in information on the risks of chemicals that are produced or used in nearby facilities.
For example, neighborhood organizations can use such information to engage in dialogues with chemical companies about reducing chemical risks, preventing accidents, and limiting chemical exposures. EPA has not performed any recent studies of the appropriateness of confidentiality claims, although a 1992 EPA study indicated that problems with inappropriate claims were extensive. This study examined the extent to which companies made confidential business information claims, the validity of the claims, and the impact of inappropriate claims on the usefulness of TSCA data to the public. While EPA may suspect that some chemical companies' confidentiality claims are unwarranted, it has no data on the number of inappropriate claims. EPA officials also told us that the agency does not have the resources that would be needed to investigate and, as appropriate, challenge claims to determine the number that are inappropriate. Consequently, EPA focuses on investigating primarily those claims that it believes may be both inappropriate and among the most potentially important—that is, claims relating to health and safety studies performed by the chemical companies involving chemicals currently used in commerce. The EPA official responsible for initiating challenges to confidentiality claims told us that EPA challenges about 14 such claims each year, and that the chemical companies withdraw nearly all of the claims challenged. During the early 1990s, the EPA Office of General Counsel led an agencywide review of EPA's confidential business information regulations, but this review did not lead to substantial changes. Subsequent to this effort, EPA developed a plan involving various voluntary and regulatory measures to reduce industry's use of TSCA confidentiality claims.
These measures included exploring ways to make confidential information available to states, having senior corporate officials certify that the information claimed as confidential meets applicable statutory and regulatory requirements, and requiring companies to reassert their claims at a future date when confidentiality may no longer be necessary. While most of these changes were not implemented, EPA officials said they did make some changes to TSCA confidential business information regulations as a result of this review, such as an up-front substantiation requirement for claiming plant site identity as confidential. EPA serves as an intermediary between chemical companies and state agencies that wish to have access to TSCA confidential information and, according to EPA, in recent years, state agencies have not been very aggressive in requesting such information. EPA believes, based on informal discussions with state officials, that obtaining such information may no longer be a high priority of the states, although the agency has not fully analyzed this issue. In addition, EPA officials said that chemical companies had expressed concerns about the costs of changing confidentiality procedures and have suggested that providing this information to states could increase the risk that some confidential information could be revealed to competitors. However, as noted previously, chemical industry representatives told us that chemical companies would not object to revising TSCA to enable states to obtain access to the confidential business information that companies provide to EPA—provided that adequate safeguards exist to ensure that the information would be used only for legitimate reasons and would be protected from inappropriate disclosures. EPA would need to ensure that the states receiving confidential information have policies and procedures similar to those that EPA uses to protect confidential information from improper disclosures.
For example, when EPA provides confidential TSCA information to other federal agencies as permitted under the act, EPA ensures that the agencies have policies and procedures for protecting the information. In this regard, among other things, the agencies provide security briefings to those handling the confidential information, take steps to prevent the information from being stored on electronic systems open to the Internet, and require that such information be kept locked away when not in use. Chemical company representatives also told us that, in principle, they have no concerns about revising TSCA or EPA regulations to require that confidentiality claims be reasserted at a future date. They said that chemical companies make bona fide claims at the time the information is submitted to EPA, but this information may not need to be kept confidential after a certain date because confidentiality may no longer be necessary in order to protect trade secrets. However, EPA has no mechanism for determining when information no longer needs to be protected as confidential. Chemical company representatives said that companies sometimes choose to inform EPA that the information is no longer confidential, but neither TSCA nor EPA regulations require them to do so. Chemical industry representatives said that a requirement to reassert claims of confidentiality at some later date would not be disruptive to the industry if the effective date of the requirement occurred after a considerable period had passed, such as 5 years or more after the information was initially claimed as confidential.
While TSCA allows EPA to require the testing of existing chemicals through the rulemaking process, EPA has found it difficult and costly to make the findings necessary to promulgate rules, including findings that a chemical may pose unreasonable risks or that the chemical will be produced in substantial quantities, and that there is or may be substantial human or environmental exposure to the chemical. Consequently, to obtain the test information needed on existing chemicals, EPA relies extensively on the chemical industry to perform specific tests of certain chemicals under (1) consent agreements negotiated with chemical companies and (2) voluntary industry efforts under the HPV Challenge Program. Although the agency believes that the negotiated agreements are enforceable and consistent with EPA's authority under TSCA section 4, the enforceable consent agreements have never been tested in court, and EPA believes that explicit reference to the agreements in TSCA would be beneficial. Chemical companies have begun voluntarily providing some test data that EPA needs to assess chemical risks through the HPV program. However, in cases where the industry does not agree to voluntarily perform testing in an adequate and timely manner, EPA believes that requiring such testing is the only practical way to ensure that testing is performed. In this regard, while the chemical industry believes that EPA can use its existing authority under TSCA to promulgate testing rules and require testing as needed on a case-by-case basis, EPA notes its relative lack of experience in promulgating large multichemical test rules and that the testing authorities may prove difficult to implement on a large number of chemicals. 
For example, EPA has pointed out that, despite notable voluntary efforts regarding high-production-volume chemicals, (1) chemical companies have not agreed to test 300 chemicals identified by EPA as high-production-volume chemicals, (2) additional chemicals will become high-production chemicals in the constantly changing commercial chemical marketplace, and (3) chemicals without a particularly high-production volume may also warrant testing based on their toxicity and the nature of exposure to them. Furthermore, although the chemical industry may be willing to take action even before EPA has the evidence required for rulemaking under TSCA, the industry is nonetheless large and diverse, and it is uncertain that all companies will always take action voluntarily. While the protection of confidential business information is obviously a legitimate concern, TSCA currently prohibits EPA from disclosing much of this data for useful and important purposes such as providing complete information to state environmental management agencies and assisting international efforts to develop and validate, for regulatory purposes, SAR models or to harmonize chemical assessment approaches by sharing information with foreign governments—a goal generally shared by government and industry. Both EPA and the chemical industry believe that revising TSCA to allow the sharing of such information would be beneficial and appropriate provided that EPA ensures that recipients have in place policies and procedures designed to prevent inappropriate disclosures of the information. In addition, EPA and the chemical industry agree that the need to protect industry data often diminishes over time, and thus it would be appropriate to revise TSCA regulations to require companies to periodically reassert the confidentiality of business information. 
Largely because of limitations in the amounts and types of test data provided with new chemical notifications, over the past decades EPA has moved toward innovative approaches to assessing new chemicals and to obtaining test data needed to assess chemicals. Most notably, these approaches include the development and extensive use of models to assess new chemicals and voluntary chemical testing approaches to obtain test data needed to assess some existing chemicals. While many of EPA's models have not been validated for regulatory purposes, EPA believes that they are useful screening tools that have supported EPA's actions to control the production or use of about 3,500 of the more than 32,000 new chemicals reviewed under TSCA. Nonetheless, EPA recognizes that, given the central role that these models play in the chemical review process, the agency needs a multifaceted strategy for improving the models, which includes obtaining additional information on chemical properties necessary to further develop and validate the models for regulatory purposes. Likewise, EPA is encouraged by the early results of the HPV voluntary chemical testing program for existing chemicals, which has already produced substantial amounts of basic test data. The agency has moved toward, but has not yet implemented, a methodology necessary for using the data to prioritize chemicals for further review and identify the specific additional data needed to determine whether and what controls should be placed on their production or use. The impact of EPA's programs could be substantially enhanced as a result of additional information that companies may be required to provide to Canada and the EU. By promulgating a rule requiring U.S. companies and their subsidiaries to submit to EPA the same information that they submit to foreign governments, the agency could acquire substantial additional basic test data and health and safety studies, at little, if any, additional cost to the chemical companies.
To improve EPA’s ability to assess the health and environmental risks of chemicals, the Congress should consider amending TSCA to provide explicit authority for EPA to enter into enforceable consent agreements under which chemical companies are required to conduct testing; give EPA, in addition to its current authorities under section 4 of TSCA, the authority to require chemical substance manufacturers and processors to develop test data based on substantial production volume and the necessity for testing; and authorize EPA to share with the states and foreign governments the confidential business information that chemical companies provide to EPA, subject to regulations to be established by EPA in consultation with the chemical industry and other interested parties, that would set forth the procedures to be followed by all recipients of the information in order to protect the information from unauthorized disclosures. To improve EPA’s management of its chemical review program, we recommend the EPA Administrator develop and implement a methodology for using information collected through the HPV Challenge Program to prioritize chemicals for further review and to identify and obtain additional information needed to assess their risks; promulgate a rule under section 8 of TSCA requiring chemical companies to submit to EPA copies of any health and safety studies, as well as other information concerning the environmental and health effects of chemicals, that they submit to foreign governments on chemicals that the companies manufacture or process in, or import to, the United States; develop a strategy for improving and validating, for regulatory purposes, the models that EPA uses to assess and predict the risks of chemicals and to inform regulatory decisions on the production, use, and disposal of the chemicals; and revise its regulations to require that companies reassert claims of confidentiality submitted to EPA under TSCA within a certain time period after the 
information is initially claimed as confidential. We provided EPA a draft of this report for its review and comment. EPA did not disagree with the report’s findings and recommendations. EPA, however, offered two substantive comments. Regarding our recommendation to the Administrator to promulgate a Section 8 rule to obtain data submitted by chemical manufacturers to foreign governments, EPA commented that, while such a reporting rule may bring useful information, other targeted approaches for collecting information which are directed at EPA’s domestic priorities, rather than foreign government mandates, may be more prudent. We believe that having access to the information submitted to foreign governments would provide EPA with an important source of information that would be useful for assessing the risks of existing chemicals and improving the models that EPA uses to assess new chemicals. EPA could tailor this rule more narrowly, however, if it saw good reason to do so, such as to avoid duplication of information it already possesses. Regarding the matter for Congressional consideration that Congress consider amending TSCA to explicitly recognize enforceable consent agreements, EPA stated that it believes that there is currently strong legal authority for these agreements. As we noted in our report, TSCA does not explicitly authorize EPA to enter into these agreements and a court could find that EPA lacked discretion to require testing other than through promulgation of a test rule. EPA’s comments are reproduced in appendix VI. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the congressional committees with jurisdiction over EPA and its activities; the Administrator, EPA; and the Director, Office of Management and Budget. We also will make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-6225 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. The Environmental Protection Agency (EPA) has initiated voluntary programs to help gather data to assess chemical risks and to promote the use of more environmentally safe chemicals. The following information does not offer an exhaustive account of EPA's voluntary programs but rather a discussion of three specific programs that are designed to complement EPA's efforts to assess and control chemicals under the Toxic Substances Control Act (TSCA) and to encourage pollution prevention under the Pollution Prevention Act (PPA). In response to several studies that showed that there were relatively few U.S. high-production-volume (HPV) chemicals for which an internationally agreed upon set of hazard screening data was available to the public, EPA, in cooperation with industry, environmental groups, and other interested parties, officially launched the HPV Challenge Program in late 1998. The program was created to ensure that a baseline set of data on approximately 2,800 high-production-volume chemicals would be made available to the public. HPV chemicals are manufactured or imported in amounts equal to or greater than 1 million pounds per year and were identified for this program through data reported under TSCA Inventory Update Rule (IUR). Under the HPV Challenge Program, EPA invited chemical companies to voluntarily sponsor the approximately 2,800 chemicals.
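The HPV volume trigger described above (manufacture or import of at least 1 million pounds per year, as reported under the IUR) amounts to a simple threshold test. The following sketch is illustrative only, not an EPA tool; the function and variable names are assumptions introduced for this example.

```python
# Illustrative only: the 1-million-pounds-per-year HPV trigger described
# in the report, expressed as a threshold check. Names are hypothetical.

HPV_THRESHOLD_LBS_PER_YEAR = 1_000_000

def is_hpv(annual_volume_lbs: float) -> bool:
    """True if combined annual manufacture/import volume meets
    or exceeds the HPV threshold."""
    return annual_volume_lbs >= HPV_THRESHOLD_LBS_PER_YEAR

print(is_hpv(1_000_000))  # True: the report's wording is "equal to or greater than"
print(is_hpv(850_000))    # False: below the threshold
```

Note that a chemical at exactly the threshold qualifies, since the report describes volumes "equal to or greater than" 1 million pounds per year.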
As part of their commitment to the HPV Challenge Program, sponsors submit data summaries of existing information along with a test plan that proposes a strategy to fill data gaps for either individual chemicals or for a category of chemicals. Sponsors could fill data gaps by (1) using existing scientifically adequate data, (2) using an estimation technique such as structure-activity relationship (SAR) analysis, or (3) proposing new testing. Testing will only be conducted when there are inadequate existing data or when other approaches, such as SAR, are not adequate to meet the need. EPA requested that companies perform a self-assessment on the quality of information they are providing to EPA. EPA officials believe that the early results of the HPV Challenge Program are promising. Nonetheless, several problems remain. While chemical companies collectively have agreed to sponsor, or provide data for, most of the chemicals that are produced at high production volumes, about 300 chemicals, called "orphans," have not been sponsored by any chemical company. In 2000, EPA issued a proposed rule under section 4 of TSCA requiring chemical companies to conduct tests on and provide data for 37 orphan chemicals, but it has not yet finalized this rule. According to EPA officials, due in part to the difficulty and cost of developing and issuing such rules, EPA has not determined how to proceed on obtaining data on the remaining orphan chemicals. EPA officials do not know if they can make the findings necessary to issue test rules for the additional unsponsored chemicals. In addition, since 1990, other chemicals have come to be produced at or above the high-production-volume threshold.
Although EPA has not developed a plan to address these new HPV chemicals, several chemical associations have announced a joint initiative to extend industry's work to chemicals that meet the HPV threshold as of 2002 and to provide use and exposure information for chemicals sponsored through EPA's and industry's programs. Finally, the HPV Challenge Program looks promising: if successful, it will provide EPA and the public with previously unavailable information on the properties of chemicals produced at large volumes in the United States. However, the program may not provide enough information for EPA to use in making risk assessment decisions. While the data in the HPV Challenge Program may help EPA prioritize chemicals of concern, the data may not present sufficient evidence for EPA to determine whether a reasonable basis exists to conclude that the chemical presents an unreasonable risk of injury to health or the environment and that regulatory action is necessary. The Voluntary Children's Chemical Evaluation Program (VCCEP) is a pilot program developed by EPA to ensure that there is adequate publicly available toxicity and exposure information to assess the potential risks to children posed by 23 specific chemicals. The pilot VCCEP was announced in a Federal Register notice in December 2000. EPA is running a pilot of the VCCEP to gain insight into how best to design and implement the program so that it effectively provides the agency and the public with the means to understand the potential health risks to children associated with certain chemical exposures. EPA intends the pilot to be the means of identifying efficiencies that can be implemented in future VCCEPs. EPA asked companies that produce and/or import 23 specific chemicals to volunteer to sponsor their chemical in the first phase of a pilot of the VCCEP. Chemical companies have volunteered to sponsor 20 of the 23 chemicals in the VCCEP.
Chemical companies volunteering to sponsor a chemical under the program make chemical-specific public commitments to make certain hazard, exposure, and risk assessment data and analyses publicly available. EPA is pursuing a three-tiered approach for gathering information, with Tier 3 conducting more detailed toxicology and exposure studies than Tier 2, and Tier 2 conducting more detailed toxicology and exposure studies than Tier 1. After the submission of Tier 1 information and its review by a peer consultation group consisting of scientific experts with extensive and broad experience in toxicity testing and exposure evaluations, EPA reviews the sponsor's assessment and develops a response focusing primarily on whether any additional information is needed to adequately evaluate the potential risks to children. If additional information is needed to assess a chemical's risk to children, EPA will indicate what information should be provided in Tier 2. Companies will then be given an opportunity to sponsor chemicals at Tier 2. EPA plans to repeat this process for determining if Tier 3 information is needed. Information from all three tiers may not always be necessary to adequately evaluate the risk to children. According to EPA officials, since the program's inception, sponsors have submitted six assessments on chemicals to EPA and the consultation group. EPA officials believe that they will collect Tier 1 data for all 20 sponsored chemicals within the next 4 to 5 years. According to EPA officials, as of December 2004, three assessments are in the peer consultation stage, and industry has indicated that three or four assessments will be ready for peer consultation in 2005. Although EPA has not yet assessed the effectiveness of VCCEP, it plans to have an interim evaluation in 2005 and a final evaluation in 2007.
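The tiered review process described above stops as soon as a tier's data suffice to evaluate risks to children, and otherwise proceeds to the next, more detailed tier. The following is a hypothetical sketch of that control flow, not an EPA system; the function name and the dictionary encoding of EPA's tier-by-tier determinations are assumptions made for illustration.

```python
# Illustrative sketch (hypothetical names, not an EPA system) of the
# tiered VCCEP review flow: after each tier's assessment, EPA decides
# whether the next, more detailed tier of toxicology and exposure
# data is needed.

def vccep_review(needs_more_after: dict[int, bool]) -> list[int]:
    """Return the tiers of data collected for a chemical.

    needs_more_after[t] is True if, after reviewing Tier t, EPA
    concludes more information is needed to evaluate risks to children.
    """
    tiers_collected = []
    for tier in (1, 2, 3):
        tiers_collected.append(tier)
        if not needs_more_after.get(tier, False):
            break  # this tier's data sufficed; stop here
    return tiers_collected

# One chemical is resolved at Tier 1; another also needs Tier 2 data.
print(vccep_review({1: False}))           # [1]
print(vccep_review({1: True, 2: False}))  # [1, 2]
```

As the report notes, information from all three tiers may not always be necessary, which is why the loop exits early whenever a tier's data are judged sufficient.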
In December 2002, EPA announced the Sustainable Futures Program, a voluntary program designed to help industry develop new chemicals that are sustainable economically and environmentally. Industry participants in the program are offered (1) hands-on training on some of EPA's chemical risk screening models, (2) regulatory relief in the form of expedited review, (3) small business assistance, (4) technical assistance, and (5) public recognition. In Sustainable Futures, EPA has sought to reduce the likelihood of harmful new chemicals entering into commerce by making its screening tools available to chemical companies. EPA provides companies training for and access to the same chemical risk screening models that EPA uses in screening and evaluating the risks of new chemicals. Use of these tools may enhance companies' ability to identify concerns and halt or redirect work on a potentially risky chemical early in the research and development phase. This approach can save a company the resources it might otherwise invest in a chemical that ultimately may encounter problems during EPA's review process for new chemicals. By getting early feedback on the potential hazards of a new chemical, a company can reduce regulatory uncertainty, lower development and production costs, and make production decisions that consider a broader array of factors other than the potential profitability of a new chemical. Additionally, by using these screening tools, companies may choose not to produce chemicals that could be regulated by EPA, thus potentially reducing EPA's regulatory burden. Canada and the European Union (EU) have inventories of chemicals already in the marketplace and require chemical companies to notify regulators about the manufacture or importation of new chemicals. Officials we spoke with identified several notable aspects of the Canadian and EU chemical legislation that differ from the Toxic Substances Control Act (TSCA).
First, in the EU, chemical companies must notify regulators prior to marketing new chemicals rather than prior to manufacturing them, so notification may occur after production has already begun. Second, Canadian law requires chemical companies to conduct testing of new chemicals based on production or import volume, while EU legislation requires testing based on marketed volume. Finally, the EU is considering changes to its basic chemical legislation that would require chemical companies to submit testing information on existing, as well as new, chemicals. A chart generally describing some of the provisions of TSCA and chemical control legislation in the EU and Canada, along with the proposed EU Registration, Evaluation and Authorization of Chemicals (REACH) regulation, is provided in table 2. Canadian Environmental Protection Act (CEPA) regulations and EU legislation require chemical companies to submit certain test data on new chemicals before they enter commerce. Canada defines new chemicals as those chemicals that are not on Canada's Domestic Substances List—a list of all known substances that were in commercial use in Canada between January 1, 1984, and December 31, 1986, were manufactured in or imported into Canada by any person in a quantity of 100 kilograms or more in any calendar year during that period, or that have subsequently been fully notified and assessed under CEPA. Under CEPA regulations, chemical companies must submit certain information and test data to the government when production or importation volumes reach specified levels. The information required for new chemicals differs depending on whether the new chemical is listed on the Non-Domestic Substances List—a list that is based on the TSCA Chemical Substances Inventory. Chemicals that are on the Non-Domestic Substances List are subject to notification requirements at higher volume thresholds than are applicable to other new chemicals and are exempt from certain information submission requirements.
In addition, the requirements to submit test data for low volume chemicals are less extensive and complex than those for high volume chemicals. According to Canadian officials, a new chemical is generally not added to the existing chemical inventory until a certain level of production or import has been reached, and specified testing for that level has been performed without conditions being placed on the chemical's manufacture or import. The EU currently maintains a separate inventory for new chemicals, which are subject to additional testing and review before they are marketed in volumes starting at 10 kilograms. Existing chemicals are not subject to the same testing requirements. However, under the proposed EU REACH chemical regulation, according to officials, this distinction between new and existing chemicals would largely be eliminated. All chemical companies would generally be required to register substances they produce or import in volumes of 1 metric ton or more per year. REACH would require chemical companies to gather and submit information on the properties of their substances and, where necessary, perform tests to generate health and safety data. For all substances subject to registration manufactured or imported by the registrant in quantities of 10 metric tons or more per year, REACH would require submission of a chemical safety report, documenting a chemical safety assessment including, among other things, human health and environmental health hazard assessments. Substances would not be allowed to be manufactured or imported in the European Community unless they met the registration requirements. Thus, according to EU officials, REACH would reverse the burden of proof that is now placed on public authorities to manage the risks and uses of particular existing chemicals. CEPA and EU legislation allow chemical companies to make confidentiality claims.
However, according to officials we spoke with, these countries place some greater restrictions than TSCA does on the types of data that may be claimed as confidential. In Canada, information that companies request be treated as confidential is not to be disclosed except in certain circumstances. The Minister of the Environment may disclose certain information upon giving 24 hours' notice to the company, if (a) the disclosure is in the interest of public health, public safety, or the protection of the environment and (b) the public interest in the disclosure outweighs in importance (1) any material financial loss or prejudice to the competitive position of the person who provided the information or on whose behalf it was provided and (2) any damage to the privacy, reputation, or human dignity of any individual that may result from disclosure. However, CEPA maintains certain protections for information protected under Canada's Privacy Act, Access to Information Act, and Hazardous Materials Information Review Act. EU legislation also allows chemical companies to make confidentiality claims. However, according to an EU official we spoke with, the EU places some greater restrictions on the types of data that may be claimed as confidential than TSCA does. In the EU, a company may indicate that information is commercially sensitive and that disclosure may be harmful to the company industrially and commercially and, therefore, that the company wishes to keep the information secret from all persons other than the competent authorities and the European Commission. Secrecy, however, shall not apply to the trade name of the substance, certain physicochemical data concerning the substance, possible ways of rendering the substance harmless, the interpretation of the toxicological and ecotoxicological tests and the name of the body responsible for the tests, and certain recommended methods and precautions and emergency measures.
The authority receiving the information is to decide on its own responsibility what information is covered by commercial and industrial secrecy. The company can go to court and appeal the authority’s decision. Under REACH, as currently proposed, one of the objectives of the new system for the management of industrial chemicals would be to make information on chemicals more widely available. Whenever a request for access to documents held by the proposed European Chemicals Agency is made, the agency would be required to inform the registrant of the chemical or other party concerned of the request. That party would have 30 days to submit a declaration identifying information considered to be commercially sensitive and disclosure of which might harm the party commercially that the party wishes to be kept confidential. The agency would consider the information and decide whether to accept the declaration. The party could appeal this decision. The following information would be among the types of information that would not be treated as confidential: the trade name(s) of the substance; physicochemical data concerning the substance and on pathways and environmental fate, the result of each toxicological and ecotoxicological study, if essential to classification and labeling, the degree of purity of the substance and the identity of impurities and/or additives which are known to be dangerous, guidance on safe use, and information contained in the safety data sheet (except for the name of the company or otherwise accepted as confidential in REACH). The following information would be treated as confidential, even if the company did not claim it as confidential: details of the full composition of a preparation, the precise use, function, or application of a substance or preparation, the precise tonnage of the substance or preparation manufactured or placed on the market, and links between a manufacturer or importer and his downstream users. 
However, in exceptional cases where there are immediate risks to human health, safety or the environment, REACH would authorize the proposed European Chemicals Agency to disclose this information. As requested, we identified a number of options that could strengthen the Environmental Protection Agency’s (EPA) ability under the Toxic Substances Control Act (TSCA) to assess chemicals and control those found to be harmful. These options are those that we previously identified in an earlier GAO report on ways to make TSCA more effective. Representatives of environmental organizations and subject matter experts subsequently concurred with a number of these options and commented on them in congressional testimony. These options are not meant to be comprehensive but illustrate actions that the Congress could take to strengthen EPA’s ability to regulate chemicals under TSCA. The Congress could amend TSCA to reduce the evidentiary burden that EPA must meet to take regulatory action under the act by (1) amending the unreasonable risk standard that EPA must meet to regulate existing chemicals under section 6 of TSCA, (2) amending the standard for judicial review that currently requires a court to hold a TSCA rule unlawful and set it aside unless it is supported by substantial evidence in the rulemaking record, or (3) amending the requirement that EPA must choose the least burdensome regulatory requirement. Currently, under TSCA section 6, EPA may only regulate existing chemicals if it finds that there is a reasonable basis to conclude that the chemical “presents or will present an unreasonable risk of injury to health or the environment.” Several options are available to amend this standard. For example: The Congress could authorize EPA to regulate existing chemicals when it identifies “significant,” rather than “unreasonable,” risks of injury to health or the environment. 
“Significant risk” is the standard under TSCA section 4(f) by which EPA is to identify chemicals for priority review. EPA officials view the term “significant risk” as a very high threshold for action. However, they believe that demonstrating significant risk would be less demanding than demonstrating unreasonable risk. While “significant risk” implies a finding that the risks are substantial or serious, EPA believes that a finding of “unreasonable” risk requires an extensive cost-benefit analysis. When reviewing EPA’s asbestos rule, the United States Court of Appeals for the Fifth Circuit stated that in evaluating what risks are unreasonable EPA must consider the costs of any proposed actions; moreover, the court noted that TSCA’s requirement that EPA impose the least burdensome regulation reinforces the view that EPA must balance the costs of its regulations against their benefits. The Congress could amend TSCA to require that EPA demonstrate that a chemical “may present” an unreasonable risk, rather than requiring a demonstration that a chemical “presents or will present” an unreasonable risk. Such a change would still require EPA to develop documentation of evidence supporting its assessment, although to a lesser extent than is currently required under TSCA. In addition, TSCA currently requires a court to hold unlawful and set aside a TSCA rule if it finds that the rule is not supported by substantial evidence in the rulemaking record. As several courts have noted, the substantial evidence standard is more rigorous than the arbitrary and capricious standard normally applied to rulemaking under the Administrative Procedure Act. The Congress could amend the standard for judicial review to instead reflect a rational basis test to prevent arbitrary and capricious administrative decisions. Finally, TSCA currently requires that EPA choose the least burdensome requirement when regulating existing chemicals. 
As we noted earlier, in its ruling that EPA had failed to muster substantial evidence to justify its asbestos ban, the United States Court of Appeals for the Fifth Circuit concluded that EPA did not present sufficient evidence to justify the ban on asbestos because it did not consider all necessary evidence and failed to show that the control action it chose was the least burdensome regulation required to adequately protect human health or the environment. EPA had not calculated the risk levels for intermediate levels of regulation, as it believed there was no asbestos exposure level for which the risk of injury or death was zero. As articulated by the court, the proper course of action for EPA, after an initial showing of product danger, would have been to consider each regulatory option, beginning with the least burdensome, and the costs and benefits of each option. Congressional testimony has indicated that, under this court decision, the process “is not merely onerous; it may well be impossible.” The Congress could amend or repeal this requirement. TSCA could be revised to require companies to test their chemicals and submit the results to EPA with their premanufacture notices. Currently, such a step is only required if EPA makes the necessary findings and promulgates a testing rule. A major drawback to testing is its cost to chemical companies, possibly resulting in a reduced willingness to perform chemical research and innovation. To ameliorate such costs, or to delay them until the new chemicals are produced in large enough quantity to offset the cost of testing, requirements for testing could be based on production volume. For example, in Canada and the EU, testing requirements for low-volume chemicals are less extensive and complex than those for high-volume chemicals. 
Another option would be to provide EPA with greater authority to require testing targeted to those areas in which EPA’s structure activity relationship (SAR) analysis does not adequately predict toxicity. For example, EPA could be authorized to require such testing if it finds that it cannot be confident of the results of its SAR analysis (e.g., when it does not have sufficient toxicity data on chemicals with molecular structures similar to those of the new chemicals submitted by chemical companies.) Under such an option, EPA could establish a minimal set of tests for new chemicals to be submitted at the time a chemical company submits a premanufacture notice for the chemical for EPA’s review. Additional and more complex and costly testing could be required as the new chemical’s potential risks increase, based on production or environmental release levels. According to some chemical companies, the cost of initial testing could be reduced by amending TSCA to require EPA to review new chemicals before they are marketed, rather than before they are manufactured. In this regard, according to EPA, about half of the premanufacture notices the agency receives from chemical companies are for new chemicals that, for various reasons, never enter the marketplace. Thus, requiring companies to conduct tests and submit the resulting test data only for chemicals that are actually marketed would be substantially less expensive than requiring them to test all new chemicals submitted for EPA’s review. TSCA’s chemical review provisions could be strengthened by requiring the systematic review of existing chemicals. In requiring that EPA review premanufacture notices within 90 days, TSCA established a firm requirement for reviewing new chemicals, but the act contains no similar requirement for existing chemicals unless EPA determines by rule that they are being put to a significant new use. 
TSCA could be amended to establish a time frame for the review of existing chemicals, putting existing chemicals on a more equal footing with new chemicals. However, because of the large number of existing chemicals, EPA would need the flexibility to identify which chemicals should be given priority. TSCA could be amended to require individual chemical companies or the industry as a whole to compile and submit chemical data to EPA, such as the data included in the HPV Challenge Program, as a condition of manufacture or import above some specified volume. Given the thousands of chemicals in use and the many ways that exposures and releases to the environment can occur, TSCA’s chemical-by-chemical approach means that the act is unlikely to address more than the most serious chemical risks. Collecting information on chemical effects and exposures to support regulatory actions under TSCA is a resource-intensive and time-consuming process. A different approach would be to set goals for reducing the use of toxic chemicals overall. Under this approach, legislation could establish national goals for reductions in the use of toxic chemicals and provide EPA with various tools, such as pollution taxes and other economic incentives, to encourage chemical companies to engage in risk reduction activities. This approach differs from a command-and-control approach in which the regulator specifies how pollution must be reduced or what pollution control technology must be used. An approach employing economic incentives gives companies more flexibility in choosing how to reduce pollution and could lead to more cost-effective solutions to pollution problems. An approach employing economic incentives can take several forms, including systems under which firms can buy and sell emission reduction credits and pollution taxes. A pollution tax is a tax on the emissions of a pollutant or on harmful products or substances. 
Such a tax would have to be carefully designed and implemented to be effective in achieving environmental and economic benefits. Because of their inherently greater flexibility, market-based incentives may be both a less costly and a more effective means of controlling pollution. More chemicals could also be addressed under TSCA if the Congress were to amend TSCA to expand the types of circumstances under which EPA could take action under the act to specifically include situations in which (1) it identifies pollution prevention opportunities, such as when safer chemical substitutes can be shown to exist at a reasonable cost, or (2) the use of a toxic chemical cannot be shown to pose a current problem, but its continued use could be a long-term problem because it persists in the environment or accumulates in plant or animal tissue. To better support EPA’s pollution prevention initiatives, TSCA could also be amended to expand the range of regulatory control options available to EPA to reduce chemical risks. Such additional options could include the authority to require the use of safer chemical substitutes or manufacturing processes that result in less exposure or fewer environmental releases. Our objectives were to review the Environmental Protection Agency’s (EPA) efforts to (1) control the risks of new chemicals not yet in commerce, (2) assess existing chemicals used in commerce, and (3) publicly disclose information provided by chemical companies under the Toxic Substances Control Act (TSCA). In addressing these issues we also obtained information on EPA’s voluntary chemical control programs that complement TSCA, the chemical control programs of Canada and the European Union (EU), and identified some legislative options that GAO and others have previously noted could strengthen EPA’s authority to assess and regulate chemicals under TSCA. 
To review the extent to which EPA has assessed the risks of new and existing chemicals and has made information obtained under TSCA public, we reviewed the relevant provisions of TSCA, identified and analyzed EPA’s regulations on how the new and existing chemical review and control programs work, including the handling of confidential information, and determined the extent of actions taken by EPA to control chemicals. These efforts were augmented by interviews with EPA officials and representatives of the American Chemistry Council (a national chemical manufacturers association), Environmental Defense (a national, nonprofit, environmental advocacy organization), and the Synthetic Organic Chemical Manufacturer’s Association (a national, specialty chemical manufacturer’s association). We also obtained and reviewed documentation provided to EPA by the states on the usefulness of confidential business information to states. We interviewed several EPA officials to assess the reliability of data related to assessment and control of new chemicals. We determined the data were sufficiently reliable for the purposes of this report. To understand efforts EPA has taken to assess and control the risks of new and existing chemicals, we identified several voluntary programs designed to promote environmentally safer chemicals and to gather information to assess the risks of chemicals, in particular, EPA’s Sustainable Futures Program, Voluntary Children’s Chemical Evaluation Program (VCCEP), and the High Production Volume (HPV) Challenge Program. We selected Sustainable Futures because it is a risk assessment tool used to complement EPA’s other pollution prevention programs. Sustainable Futures represents a pollution prevention program that affects manufacturers’ chemical decision-making processes for chemicals not yet in commerce, while other pollution prevention programs focus on chemicals already in commerce. 
We selected the HPV Challenge Program and VCCEP because they represent significant data collection efforts to provide information for EPA’s assessment of existing chemicals. To enhance our understanding, we interviewed EPA officials and representatives at American Chemistry Council, Environmental Defense, and the Synthetic Organic Chemical Manufacturer’s Association; we also attended EPA’s National Toxic and Pollution Prevention Advisory Committee meetings. Finally, we obtained and reviewed agency documents related to these programs. To understand other chemical control regulation, we collected documentation and interviewed individuals knowledgeable about (1) the Toxic Substances Control Act and (2) foreign chemical control laws or proposed legislation: (a) the Canadian Environmental Protection Act 1999 and (b) the European Union’s Chemical Directives and proposed Registration, Evaluation and Authorization of Chemicals. The EU and Canada were chosen because they have recently taken action to revise their chemical legislation. In 1999, Canada revised its chemical control law and in 2003, the EU proposed a new regulation. The EU and Canada were also selected because they have characteristics that are similar to those of the United States: Canada and the EU member countries are industrialized nations and have extensive experience with the review and control of chemical substances. In addition, Canada and the EU produce a considerable amount of chemicals. Furthermore, EPA officials and chemical industry representatives recommended these countries for comparison with TSCA. For each of the countries, we obtained laws, technical literature, and government documents that describe their chemical control programs. We also interviewed foreign officials responsible for implementing the chemical substances control laws in Canada and for representing the European Commission in the United States. 
Our descriptions of these countries’ laws are based on interviews with government officials and written materials they provided. To identify potential options to strengthen EPA’s ability to assess and regulate chemical risks under TSCA, we (1) interviewed officials at EPA, the American Chemistry Council, Environmental Defense, EPA’s National Toxic and Pollution Prevention Advisory Committee, and the Synthetic Organic Chemical Manufacturer’s Association; (2) reviewed pertinent literature, including prior GAO reports and congressional hearings on TSCA; (3) attended various public meetings and conferences sponsored by EPA and others; and (4) reviewed chemical legislation in Canada and proposed legislation in the EU. This report does not discuss all possible options for revising TSCA. Those options that are discussed were selected because they have been identified as addressing constraints in EPA's authority under the act. Our selection of these options reflects (1) our knowledge of EPA’s implementation of TSCA obtained during this and previous reviews of the agency’s toxics programs, (2) foreign countries’ approaches to reviewing and controlling harmful chemicals, and (3) views provided by U.S. government officials and representatives of the chemical industry and environmental groups. Our review was performed between June 2004 and April 2005 in accordance with generally accepted government auditing standards. The Environmental Protection Agency (EPA) has promulgated rules under section 6 of the Toxic Substances Control Act (TSCA) to place restrictions on five existing chemicals or chemical categories and four new chemicals. The five existing chemicals/chemical categories are polychlorinated biphenyls (PCB), fully halogenated chlorofluoroalkanes, dioxin, asbestos, and hexavalent chromium. The four new chemicals are all used in metal working fluids that, when combined with nitrites, could cause the formation of a cancer-causing substance. 
EPA’s rules for the four new chemicals were immediately effective, unlike EPA’s rules for existing chemicals, which required a comment period. Because the Congress believed that PCBs posed a significant risk to public health and the environment, section 6(e) of TSCA prohibited the manufacture, processing, distribution in commerce, or use of PCBs other than in a totally enclosed manner after January 1, 1978, unless otherwise authorized by EPA rule. Under TSCA, EPA may, by rule, authorize the manufacture, processing, distribution in commerce or use of any PCB in a manner other than a totally enclosed manner if EPA finds that it will not present an unreasonable risk of injury to health or the environment. EPA was also required by July 1977 to promulgate rules to (1) prescribe methods for PCB disposal and (2) require PCBs to be marked with clear and adequate warnings and instructions with respect to their processing, distribution in commerce, use, or disposal. EPA has issued various rules to implement these statutory requirements and provide for some exemptions to the PCB prohibitions. About 50 percent of PCBs were used in electrical, heat transfer, and hydraulic equipment. PCBs were also used in numerous other applications, including plasticizers and fire retardants. Approximately half of the PCBs manufactured were disposed of or released into the environment prior to EPA promulgating rules for the disposal requirements under TSCA. PCBs are toxic and very persistent in the environment. When released into the environment, they decompose very slowly and can accumulate in plants, animals, and human tissue. Laboratory tests show that they cause cancer in rats and mice and that they have adverse effects on fish and wildlife. In 1978, EPA banned nonessential uses of fully halogenated chlorofluoroalkanes as propellants in aerosol spray containers. 
EPA took this action because of concerns that these chemicals were destroying the upper atmosphere’s ozone layer, which shields the earth from ultraviolet radiation. Increased exposure to ultraviolet radiation has been linked to increased skin cancer. Depletion of the ozone layer is also thought to lead to climate changes and other adverse effects. Chlorofluorocarbons, halons, and other fully halogenated chlorofluoroalkanes have been relied upon for applications including air conditioning, refrigeration, fire suppression, insulation, and solvent cleaning. According to EPA officials, in advance of its obligations under the Montreal Protocol, the United States began phasing out production of the most potent ozone-depleting chemicals in 1994 and is now gradually phasing out hydrofluorocarbon production as well. According to EPA officials, other industrialized countries have followed the U.S. lead, and developing countries with assistance from the Multilateral Fund are now complying with the protocol phaseout requirements. The regulation of fully halogenated chlorofluoroalkanes was eliminated in 1995 by an EPA final rule because EPA had banned such chlorofluorocarbon propellants under the Clean Air Act, making the TSCA rule obsolete. In 1980, EPA promulgated a rule prohibiting Vertac Chemical Company and others from removing for disposal certain wastes containing 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) stored at Vertac’s Jacksonville, Arkansas, facility. The rule also required any persons planning to dispose of TCDD-contaminated wastes to notify EPA 60 days before their intended disposal. TCDD, one of the most toxic of the about 75 dioxins in existence and an animal carcinogen, is a contaminant or waste product formed during the manufacture of certain substances. EPA concluded that exposure to TCDD was likely to result in adverse human health effects. This TSCA action was superseded by a 1985 Resource Conservation and Recovery Act regulation. 
Asbestos, which refers to several minerals that typically separate into very tiny fibers, is a known human carcinogen that can cause lung cancer and other diseases if inhaled. Asbestos containing materials were used widely for fireproofing, thermal and acoustical insulation, and decoration in building construction and renovation before the adverse effects of asbestos were known. Asbestos also has numerous other applications, for example, in friction products such as brake linings. After initially regulating asbestos under the Clean Air Act in the early 1970s, EPA issued a final rule under TSCA to ban the manufacturing, importing, and processing of nearly all asbestos products in July 1989. The rule was to begin phasing out asbestos-containing products in August 1990, and complete the phaseout by 1997. EPA’s rule was challenged in federal court by asbestos product manufacturers, and in October 1991, the United States Court of Appeals for the Fifth Circuit vacated most of the rule—the rule continued to apply to asbestos products no longer in commerce—and remanded it to the agency for further consideration. In 1990, EPA banned the use of hexavalent chromium-based water treatment chemicals in comfort cooling towers (CCT) and the distribution of them in commerce for use in CCTs on the basis of health risks associated with human exposure to air emissions. According to EPA, hexavalent chromium was being released from a large number of unidentified cooling towers. At the time, hexavalent chromium was a known human carcinogen. EPA could have issued an emissions standard under the Clean Air Act. However, the agency believed that regulation under TSCA would be more efficient and effective because the act could be used to regulate use and distribution of hexavalent chromium-based water treatment chemicals. 
EPA issued proposed rules to impose certain controls on four new chemicals: (1) mixed mono and diamides of an organic acid, (2) triethanolamine salts of a substituted organic acid, (3) triethanolamine salt of tricarboxylic acid, and (4) tricarboxylic acid. The agency determined these chemicals would pose an unreasonable risk to human health or the environment. According to EPA, adding nitrites or other nitrosating agents to the substances causes the formation of a substance known to cause cancer in laboratory animals. EPA promulgated the rules regulating these chemicals in 1984 to prohibit adding any nitrosating agent, including nitrites, to metal working fluids that contain these substances. EPA promulgated the rules under TSCA section 5(f). Under this section of TSCA, if EPA determines that there is a reasonable basis to conclude that the manufacturing, processing, distribution in commerce, or disposal of a new chemical presents or will present an unreasonable risk of injury to health or the environment before a rule under TSCA section 6 can be promulgated to protect against that risk, EPA may limit the amount or impose other restrictions via an immediately effective proposed rule. The restrictions on these chemicals remain in place today. In addition to the individual named above, David Bennett, John Delicath, Richard Frankel, Ed Kratzer, Malissa Livingston, Jean McSween, Marcella Phelps, and Amy Webbink made key contributions to this report. | Chemicals play an important role in everyday life, but some may be harmful to human health and the environment. Chemicals are used to produce items widely used throughout society, including consumer products such as cleansers, paints, plastics, and fuels, as well as industrial solvents and additives. However, some chemicals, such as lead and mercury, are highly toxic at certain doses and need to be regulated because of health and safety concerns. 
In 1976, the Congress passed the Toxic Substances Control Act (TSCA) to authorize the Environmental Protection Agency (EPA) to control chemicals that pose an unreasonable risk to human health or the environment. GAO reviewed EPA's efforts to (1) control the risks of new chemicals not yet in commerce, (2) assess the risks of existing chemicals used in commerce, and (3) publicly disclose information provided by chemical companies under TSCA. EPA's reviews of new chemicals provide limited assurance that health and environmental risks are identified before the chemicals enter commerce. Chemical companies are not required by TSCA, absent a test rule, to test new chemicals before they are submitted for EPA's review, and companies generally do not voluntarily perform such testing. Given limited test data, EPA predicts new chemicals' toxicity by using models that compare the new chemicals with chemicals of similar molecular structures that have previously been tested. However, the use of the models does not ensure that chemicals' risks are fully assessed before they enter commerce because the models are not always accurate in predicting chemical properties and toxicity, especially in connection with general health effects. Nevertheless, given the lack of test data and health and safety information available to the agency, EPA believes the models are generally useful as screening tools for identifying potentially harmful chemicals and, in conjunction with other information, such as the anticipated potential uses and exposures of the new chemicals, provide a reasonable basis for reviewing new chemicals. The agency recognizes, however, that obtaining additional information would improve the predictive capabilities of its models. EPA does not routinely assess the risks of all existing chemicals and EPA faces challenges in obtaining the information necessary to do so. 
TSCA's authorities for collecting data on existing chemicals do not facilitate EPA's review process because they generally place the costly and time-consuming burden of obtaining data on EPA. Partly because of a lack of information on existing chemicals, EPA, in partnership with industry and environmental groups, initiated the High Production Volume (HPV) Challenge Program in 1998, under which chemical companies began voluntarily providing information on the basic properties of chemicals produced in large amounts. It is unclear whether the program will produce sufficient information for EPA to determine chemicals' risks to human health and the environment. EPA has limited ability to publicly share the information it receives from chemical companies under TSCA. TSCA prohibits the disclosure of confidential business information, and chemical companies claim much of the data submitted as confidential. While EPA has the authority to evaluate the appropriateness of these confidentiality claims, EPA states that it does not have the resources to challenge large numbers of claims. State environmental agencies and others are interested in obtaining confidential business information for use in various activities, such as developing contingency plans to alert emergency response personnel of the presence of highly toxic substances at manufacturing facilities. Chemical companies recently have expressed interest in working with EPA to identify ways to enable other organizations to use the information given the adoption of appropriate safeguards. |
In March 2008, then Deputy Attorney General Craig Morford issued a memorandum—also known as the “Morford Memo”—to help ensure that the monitor selection process is collaborative, results in the selection of a highly-qualified monitor suitable for the assignment, avoids potential conflicts of interest, and is carried out in a manner that instills public confidence. The Morford Memo requires USAOs and other DOJ litigation divisions to establish ad hoc or standing committees consisting of the office’s ethics advisor, criminal or section chief, and at least one other experienced prosecutor to consider the candidates—which may be proposed by either prosecutors, companies, or both—for each monitorship. DOJ components are also reminded to follow specified federal conflict of interest guidelines and to check monitor candidates for potential conflict of interest relationships with the company. In addition, the names of all selected monitors for DPAs and NPAs must be submitted to ODAG for final approval. Following issuance of the Morford Memo, DOJ entered into 35 DPAs and NPAs, 6 of which required the company to hire an individual to oversee the company’s compliance with the terms of the DPA. As of November 2009, DOJ had selected monitors for 4 of the 6 agreements. Based on our discussions with prosecutors and documentation from DOJ, we determined that for these 4 agreements, DOJ made the selections in accordance with Morford Memo guidelines. Further, while the Morford Memo does not specify a selection process that must be used in all cases, it suggests that in some cases it may be appropriate for the company to select the monitor or propose a pool of qualified candidates from which DOJ will select the monitor. In all 4 of these cases, the company either selected the monitor, subject to DOJ’s approval, or provided DOJ with proposed monitor candidates from among which DOJ selected the monitor. 
However, while we were able to determine that the prosecutors complied with the Morford Memo based on information obtained through our interviews, DOJ did not fully document the selection and approval process for 2 of the 4 monitor selections. The lack of such documentation will make it difficult for DOJ to validate to an independent third-party reviewer, as well as to Congress and the public, that prosecutors across DOJ offices followed Morford Memo guidelines and that monitors were selected in a way that was fair and merit based. For example, for 1 of these 2 agreements, DOJ did not document who in the U.S. Attorney’s Office was involved in reviewing the monitor candidates, which is important because the Morford Memo requires that certain individuals in the office be part of the committee to consider the selection or veto of monitor candidates in order to ensure monitors are not selected unilaterally. For the second agreement, the Deputy Attorney General’s approval of the selected monitor was relayed via telephone and not documented. As a result, in order to respond to our inquiries, DOJ officials had to reach out to individuals who were involved in the telephone call, one of whom was no longer a DOJ employee, to obtain information regarding the monitor’s approval. Documenting the reasons for selecting a particular monitor helps avoid the appearance of favoritism and verifies that Morford Memo processes and practices—which are intended to instill public confidence in the monitor selection process—were followed. Therefore, in our June 25, 2009, testimony, we recommended that the Deputy Attorney General adopt internal procedures to document both the process used and reasons for monitor selection decisions. DOJ agreed with our recommendation and, in August 2009, instituted such procedures. 
Specifically, DOJ requires ODAG to complete a checklist confirming receipt of the monitor selection submission—including the process used and reasons for selecting the monitor—from the DOJ component; ODAG’s review, recommendation, and decision to either approve or reject the proposed monitor; the DOJ component’s notification of ODAG’s decision; and ODAG’s documentation of these steps. For the two monitors selected during or after August 2009, DOJ provided us with completed checklists to confirm that ODAG had followed the new procedures. While DOJ selected monitors in accordance with the Morford Memo, monitor selections have been delayed for three agreements entered into after the Morford Memo was issued. The selection of one monitor took 15 months from the time the agreement was signed, and selection of two monitors, as discussed above, has been delayed for more than 17 months from the time the agreement was signed. According to DOJ, the delays in selecting these three monitors have been due to challenges in identifying candidates with proper experience and resources who also do not have potential conflicts of interest with the company. Further, DOJ’s selection of monitors in these three cases took more time than its selection of monitors both prior to and since the issuance of the Morford Memo—which on average was about 2 months from the time the NPA or DPA was signed or filed. According to the Senior Counsel to the Assistant Attorney General for the Criminal Division, for these three agreements, the prosecutors overseeing the cases have communicated with the companies to ensure that they are complying with the agreements. Further, DOJ reported that the prosecutors are working with each of the companies to extend the duration of the DPAs to ensure that the duties and goals of each monitorship are fulfilled and, as of October 2009, an agreement to extend the monitorship had been signed for one of the DPAs. 
Such action by DOJ will better position it to ensure that the companies are in compliance with the agreements while awaiting the selections of the monitors. For the 48 DPAs and NPAs where DOJ required independent monitors, companies have hired a total of 42 different monitors, more than half of whom were former DOJ employees. Specifically, of these 42 monitors, 23 previously worked at DOJ, while 13 did not. The 23 monitors held various DOJ positions, including Assistant U.S. Attorney, Section Chief or Division Chief in a litigating component, U.S. Attorney, Assistant Attorney General, and Attorney General. The length of time between the monitor’s separation from DOJ and selection as monitor ranged from 1 year to more than 30 years, with an average of 13 years. Five individuals were selected to serve as monitors within 3 years or less of being employed at DOJ. In addition, 8 of these 23 monitors had previously worked in the USAO or DOJ litigating component that oversaw the DPA or NPA for which they were the monitor. In these 8 cases, the length of time between the monitor’s separation from DOJ and selection as monitor ranged from 3 years to 34 years, with an average of almost 15 years. Of the remaining 13 monitors with no previous DOJ experience, 6 had previous experience at a state or local government agency, for example, as a prosecutor in a district attorney’s office; 3 had worked in federal agencies other than DOJ, including the Securities and Exchange Commission and the Office of Management and Budget; 2 were former judges; 2 were attorneys in the military; 3 had worked solely in private practice in a law firm; and 1 had worked as a full-time professor. 
Of the 13 company representatives with whom we spoke who were required to hire independent monitors, in providing perspectives on monitors’ previous experience, representatives from 5 of these companies stated that prior employment at DOJ or an association with a DOJ employee could impede the monitor’s independence and impartiality, whereas representatives from the other 8 companies disagreed. Specific concerns raised by the 5 companies—2 of which had monitors with prior DOJ experience—included the possibility that the monitor would favor DOJ and have a negative predisposition toward the company or, if the monitor recently left DOJ, the monitor may not be considered independent; however, none of the companies identified specific instances with their monitors where this had occurred. Of the remaining 8 company representatives who did not identify concerns, 6 of them worked with monitors who were former DOJ employees, and some of these officials commented on their monitors’ fairness and breadth of experience. In addition, 5 company representatives we spoke with who were involved in the monitor selection process said that they were specifically looking for monitors with DOJ experience and knowledge of the specific area of law that the company violated. Officials from 8 of the 13 companies with whom we spoke raised concerns about their monitors, which were either related to how monitors were carrying out their responsibilities or issues regarding the overall cost of the monitorship. However, these companies said that it was unclear to what extent DOJ could help to address these concerns. Seven of the 13 companies identified concerns about the scope of the monitor’s responsibilities or the amount of work the monitor completed. 
For example, 1 company said that the monitor had a large number of staff assisting him on the engagement, and he and his staff attended more meetings than the company felt was necessary, some of which were unrelated to the monitor responsibilities delineated in the agreement, such as a community service organization meeting held at the company when the DPA was related to securities fraud. As a result, the company believes that the overall cost of the monitorship—with 20 to 30 lawyers billing the company each day—was higher than necessary. Another company stated that its monitor did not complete the work required in the agreement in the first phase of the monitorship—including failing to submit semi-annual reports on the company’s compliance with the agreement to DOJ during the first 2 years of the monitorship— resulting in the monitor having to complete more work than the company anticipated in the final phase of the monitorship. According to the company, this led to unexpectedly high costs in proportion to the company’s revenue in the final phase, which was significant because the company is small. Further, according to a company official, the monitor’s first report contained numerous errors that the company did not have sufficient time to correct before the report was submitted to DOJ and, thus, DOJ received a report containing errors. While 6 of the 13 companies we interviewed did not express concerns about the monitor’s rates, 3 companies expressed concern that the monitor’s rate (which ranged from $290 per hour to a rate of $695 to $895 per hour among the companies that responded to our survey) was high. Further, while 9 of the 13 companies that responded to our survey believed that the total compensation received by the monitor or monitoring firm was reasonable for the type and amount of work performed (which, according to the companies that responded to our survey, ranged from $8,000 to $2.1 million per month), 3 companies did not believe it was reasonable. 
When asked how they worked to resolve these issues with the monitor, companies reported that they were unaware of any mechanisms available to resolve the issues—including DOJ involvement—or, if they were aware that DOJ could get involved, they were reluctant to seek DOJ’s assistance. Specifically, three of the eight companies that identified concerns with their monitor were not aware of any mechanism in place to raise these concerns with DOJ. Four companies were aware that they could raise these concerns with DOJ, but three of these companies said that they would be reluctant to raise these issues with DOJ for fear of repercussions. Another company did not believe that DOJ had the authority to address their concerns because they were related to staffing costs, which were delineated in the contract negotiated between the company and the monitor, not the DPA. However, DOJ had a different perspective than the company officials on its involvement in resolving disputes between companies and monitors. According to the Senior Counsel to the ODAG, while DOJ has not established a mechanism through which companies can raise concerns with their monitors to DOJ and clearly communicated to companies how they should do so, companies are aware that they can raise monitor-related concerns to DOJ if needed. Further, it was the Senior Counsel’s understanding that companies frequently raise issues regarding DPAs and NPAs to DOJ without concerns about retribution, although to his knowledge, no companies had ever raised monitor-related concerns to ODAG. The Senior Counsel acknowledged, however, that even if companies did raise concerns to DOJ regarding their monitors, the point in the DPA process at which they did so may determine the extent of DOJ’s involvement. 
Specifically, according to this official, while he believed that DOJ may be able to help resolve a dispute after the company and monitor enter into a contract, he stated that, because DOJ is not a party to the contract, if a conflict were to arise over, for instance, the monitor’s failure to complete periodic reports, DOJ could not compel the monitor to complete the reports, even if the requirement to submit periodic reports was established in the DPA or NPA. In contrast, the Senior Counsel said that if the issues between monitors and companies arise prior to the two parties entering into a contract, such as during the fee negotiation phase, DOJ may be able to play a greater role in resolving the conflict. However, the mechanisms that DOJ could use to resolve such issues with the monitor are uncertain since while the monitor’s role is delineated in the DPA, there is no contractual agreement between DOJ and the monitor. DOJ is not a party to the monitoring contract signed by the company and the monitor, and the monitor is not a party to the DPA signed by DOJ and the company. We are aware of at least one case in which the company sought DOJ’s assistance in addressing a conflict with the monitor regarding fees, prior to the monitor and company signing their contract. Specifically, one company raised concerns about the monitor to the U.S. Attorney handling the case, stating that, among other things, the company believed the monitor’s fee arrangement was unreasonably high and the monitor’s proposed billing arrangements were not transparent. The U.S. Attorney declined to intervene in the dispute stating that it was still at a point at which the company and the monitor could resolve it. The U.S. Attorney instructed the company to quickly resolve the dispute directly with the monitor—noting that otherwise, the dispute might distract the company and the monitor from resolving the criminal matters that were the focus of the DPA. The U.S. 
Attorney also asked the company to provide an update on its progress in resolving the conflict the following week. A legal representative of the company stated that he did not believe he had any other avenue for addressing this dispute after the U.S. Attorney declined to intervene. As a result, although the company disagreed with the high fees, it signed the contract because it did not want to begin the monitorship with a poor relationship with the monitor resulting from a continued fee dispute. The Senior Counsel to the ODAG stated that because the company is signatory to both the DPA or NPA and the contract with the monitor, it is the company’s responsibility to ensure that the monitor is performing the duties described in the agreement. However, 5 of the 7 companies that had concerns about the scope of the monitor’s responsibilities or the amount of work the monitor completed did not feel as if they could adequately address their issues by discussing them with the monitors. This is because two companies said that they lacked leverage to address issues with monitors and two companies feared repercussions if they raised issues with their monitors. The Senior Counsel stated that one way the company could hold the monitor accountable is by incorporating the monitor requirements listed in the DPA into the monitoring contract and additionally include a provision in the contract that the monitor can be terminated for not meeting these requirements. However, the companies that responded to our survey did not generally include monitor termination provisions in their contracts. Specifically, 7 of the 13 companies that responded to our survey reported that their monitoring contract contained no provisions regarding termination of the monitor, and another 3 companies reported that their contract contained a clause that actually prohibited the company from terminating the monitor. 
Only 1 company that responded to our survey reported that the contract allowed it to terminate the monitor with written notice at any time, once the company and DOJ agreed (and subject to the company’s obligation to pay the monitor). This contract also included a provision allowing for the use of arbitration to resolve disputes between the company and the monitor over, for instance, services rendered and fees. In order to more consistently include such termination clauses in the monitoring contracts, companies would need the monitor’s consent. Given that DOJ makes the final decision regarding the selection of a particular monitor—and that DOJ allows for, but does not require, company involvement in the monitor selection process—it is uncertain how much leverage the company would have to negotiate that such termination or dispute resolution terms be included in the contract with the monitor. Because monitors are one mechanism that DOJ uses to ensure that companies are reforming and meeting the goals of DPAs and NPAs, DOJ has an interest in monitors performing their duties properly. While over the course of our review, we discussed with DOJ officials various mechanisms by which conflicts between companies and monitors could be resolved, including when it would be appropriate for DOJ to be involved, DOJ officials acknowledged that prosecutors may not be having similar discussions with companies about resolving conflict. This could lead to differing perspectives between DOJ and companies on how such issues should be addressed. Internal control standards state that agency management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. According to DOJ officials, the Criminal Division Fraud Section has made some efforts to clarify what role it will play in resolving disputes between the company and the monitor. 
For example, 11 of 17 DPAs or NPAs entered into by the Fraud Section that required monitors allowed companies to bring to DOJ’s attention any disputes over implementing recommendations made by monitors during the course of their reviews of company compliance with DPAs and NPAs. In addition, 8 of these 11 agreements provide for DOJ to resolve disputes between the company and the monitor related to the work plan the monitor submitted to DOJ and the company before beginning its review of the company. Additionally, in 5 agreements entered into by one USAO, the agreement specified that the company could bring concerns about unreasonable costs of outside professionals—such as accountants or consultants—hired by the monitor to the USAO for dispute resolution. While the Criminal Division Fraud Section and one USAO have made efforts to articulate in the DPA or NPA the extent to which DOJ would be willing to be involved in resolving specific kinds of monitor issues for that particular case, other DOJ litigating divisions and USAOs that entered into DPAs and NPAs have not. Clearly communicating to companies and monitors in each DPA and NPA the role DOJ will play in addressing companies’ disputes with monitors would help better position DOJ to be notified of potential issues companies have identified related to monitor performance. According to DOJ, DPAs and NPAs can be invaluable tools for fighting corporate corruption and helping to rehabilitate a company, although use of these agreements has not been without controversy. DOJ has taken steps to address concerns that monitors are selected based on favoritism or bias by developing and subsequently adhering to the Morford Memo guidelines. However, once the monitors are selected and any issues—such as fee disputes or concerns with the amount of work the monitor is completing—arise between the monitor and the company, it is not always clear what role, if any, DOJ will play in helping to resolve these issues. 
Clearly communicating to companies and monitors the role DOJ will play in addressing companies’ disputes with monitors would help better position DOJ to be made aware of issues companies have identified related to monitor performance, which is of interest to DOJ since it relies on monitors to assess companies’ compliance with DPAs and NPAs. We are continuing to assess the potential need for additional guidance or other improvements in the use of DPAs and NPAs in our ongoing work. To provide clarity regarding DOJ’s role in resolving disputes between companies and monitors, the Attorney General should direct all litigating components and U.S. Attorneys Offices to explain in each corporate DPA or NPA what role DOJ could play in resolving such disputes, given the facts and circumstances of the case. We requested comments on a draft of this statement from DOJ. DOJ did not provide official written comments to include in the statement. However, in an email sent to us on November 17, 2009, DOJ provided technical comments, which we incorporated into the statement, as appropriate. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Kristy N. Brown, Jill Evancho, Tom Jessor, Sarah Kaczmarek, Danielle Pakdaman, and Janet Temko, as well as Katherine Davis and Amanda Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
| Recent cases of corporate fraud and mismanagement heighten the Department of Justice's (DOJ) need to appropriately punish and deter corporate crime. Recently, DOJ has made more use of deferred prosecution and non-prosecution agreements (DPAs and NPAs), in which prosecutors may require company reform, among other things, in exchange for deferring prosecution, and may also require companies to hire an independent monitor to oversee compliance. This testimony addresses (1) the extent to which prosecutors adhered to DOJ's monitor selection guidelines, (2) the prior work experience of monitors and companies' opinions of this experience, and (3) the extent to which companies raised concerns about their monitors, and whether DOJ had defined its role in resolving these concerns. Among other steps, GAO reviewed DOJ guidance and examined the 152 agreements negotiated from 1993 (when the first 2 were signed) through September 2009. GAO also interviewed DOJ officials, obtained information on the prior work experience of monitors who had been selected, and interviewed representatives from 13 companies with agreements that required monitors. These results, while not generalizable, provide insights into monitor selection and oversight. Prosecutors adhered to DOJ guidance issued in March 2008 in selecting monitors required under agreements entered into since that time. Monitor selections in two cases have not yet been made due to challenges in identifying candidates with proper experience and resources and without potential conflicts of interests with the companies. DOJ issued guidance in March 2008 to help ensure that the monitor selection process is collaborative and based on merit; this guidance also requires prosecutors to obtain Deputy Attorney General approval for the monitor selection. 
For DPAs and NPAs requiring independent monitors, companies hired a total of 42 different individuals to oversee the agreements; 23 of the 42 monitors had previous experience working for DOJ--which some companies valued in a monitor choice--and those without prior DOJ experience had worked in other federal, state, or local government agencies, the private sector, or academia. The length of time between the monitor's leaving DOJ and selection as a monitor ranged from 1 year to over 30 years, with an average of 13 years. While most of the companies we interviewed did not express concerns about monitors having prior DOJ experience, some companies raised general concerns about potential impediments to independence or impartiality if the monitor had previously worked for DOJ or had associations with DOJ officials. Representatives for more than half of the 13 companies with whom GAO spoke raised concerns about the monitor's cost, scope, and amount of work completed--including the completion of compliance reports required in the DPA or NPA--and were unclear as to the extent DOJ could be involved in resolving such disputes, but DOJ has not clearly communicated to companies its role in resolving such concerns. Companies and DOJ have different perceptions about the extent to which DOJ can help to resolve monitor disputes. DOJ officials GAO interviewed said that companies should take responsibility for negotiating the monitor's contract and ensuring the monitor is performing its duties, but that DOJ is willing to become involved in monitor disputes. However, some company officials were unaware that they could raise monitor concerns to DOJ or were reluctant to do so. Internal control standards state that agency management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. While one of the DOJ litigating divisions and one U.S. 
Attorney's Office have made efforts to articulate in the DPAs and NPAs what role they could play in resolving monitor issues, other DOJ litigation divisions and U.S. Attorney's Offices have not done so. Clearly communicating to companies the role DOJ will play in addressing companies' disputes with monitors would help increase awareness among companies and better position DOJ to be notified of potential issues related to monitor performance. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
We reviewed the Department of the Treasury’s report, interviewed senior IRS officials responsible for the actions being taken to correct the management and technical weaknesses, and reviewed documentation. On June 4, 1996, we briefed senior Treasury and IRS officials, including the Deputy Secretary of the Treasury and the Commissioner of the IRS, on the results of our review. We performed our work at IRS headquarters in Washington, D.C., between May 9, 1996 and June 4, 1996 in accordance with generally accepted government auditing standards. The Department of the Treasury and IRS provided comments on a draft of this report, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. IRS envisions a modernized tax processing environment which is virtually paper free and in which taxpayer information is readily available to IRS employees to update taxpayer accounts and respond to taxpayer inquiries. In our July 1995 report, we emphasized the need for IRS to have in place sound management and technical practices to increase the likelihood that TSM’s objectives will be cost-effectively and expeditiously met. A 1996 National Research Council report on TSM had a similar message. Its recommendations parallel the over a dozen recommendations we made in July 1995 to improve IRS’ (1) business strategy to reduce reliance on paper, (2) strategic information management practices, (3) software development capabilities, (4) technical infrastructures, and (5) organizational controls. In the July 1995 report, we described our methodology for analyzing IRS’ strategic information management practices, drawing heavily from our research on the best practices of private and public sector organizations that have been successful in improving their performance through strategic information management and technology. 
These fundamental best practices are discussed in our report Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994), and our Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). To evaluate IRS’ software development capability, we validated IRS’ September 1993 assessment of its software development maturity based on the Capability Maturity Model (CMM) developed by Carnegie Mellon University’s Software Engineering Institute, a nationally recognized authority in the area. This model establishes standards in key software development process areas (i.e., requirements management, project planning, project tracking and oversight, configuration management, quality assurance, and subcontractor management) and provides a framework to evaluate a software organization’s capability to consistently and predictably produce high-quality products. When we briefed the IRS Commissioner in April 1995 and issued our report documenting its weaknesses in July 1995, IRS agreed with our recommendations to make corrections expeditiously. At that time, we considered IRS’ response to be a commitment to correct its management and technical weaknesses. In September 1995, IRS submitted an action plan to the Congress explaining how it planned to address our recommendations. In our March 1996 testimony to the House Appropriation Committee’s Subcommittee on Treasury, Postal Service, and General Government, we noted that this plan, follow-up meetings with senior IRS officials, and other draft and “preliminary draft” documents received through early March 1996 provided little tangible evidence that actions being taken would correct the pervasive management and technical weaknesses that continued to place TSM, and the huge investment it represents, at risk. 
This interim status report on IRS’ efforts to respond to our July 1995 recommendations noted that IRS had initiated a number of activities and made some progress in addressing our recommendations to improve management of information systems; enhance its software development capability; and better define, perform, and manage TSM’s technical activities. However, we reported that none of these steps had fully satisfied any of our recommendations. Consequently, IRS was not in an appreciably better position in March 1996 than it was in April 1995 to assure the Congress that it would spend its fiscal year 1996 and future TSM appropriations judiciously and effectively. In a subsequent testimony before the Senate Committee on Governmental Affairs, we reiterated our concerns that IRS’ effort to modernize tax processing was jeopardized by persistent and pervasive management and technical weaknesses, and that ongoing efforts did not include milestones or provide enough evidence to conclude that weaknesses will soon be corrected. We also addressed analogous technical weaknesses in an electronic filing system project called Cyberfile which substantiated our concerns that IRS was continuing to risk millions of dollars in undisciplined systems development in fiscal year 1996. In addition, we identified physical security risks at the planned Cyberfile data center. The Department of the Treasury, in its May 1996 report to the Senate and House Appropriations Committees, provides a candid assessment of TSM progress and future redirection, and a description of ongoing and planned actions intended to respond to our recommendations to correct management and technical weaknesses. It finds that despite some qualified success, IRS has not made progress on TSM as planned because systems development efforts have taken longer than expected, cost more than originally estimated, and delivered less functionality than originally envisioned. 
It concludes that significant changes are needed in IRS’ management approach, and that it is beyond the scope of IRS’ current ability to develop and integrate TSM without expanded use of external expertise. The report notes that work has been done to rethink, scale back, and change the direction of TSM. Additional changes are still in progress with actions underway to restructure the management of TSM and expand the use of contractors. Agreeing that our July 1995 recommendations are valid, the report notes that more work has to be done to respond to our recommendations. It states that progress in IRS’ management and technical areas can only be achieved by institutionalizing improved practices and monitoring projects for conformance to mandated standards and practices. The report does not address the basic problem of continuing to invest hundreds of millions of dollars in TSM before the requisite management and technical disciplines are in place. Neither does it address the risk inherent in shifting hundreds of millions of dollars to additional contractual efforts when the evidence is clear that IRS does not have the disciplined processes in place to manage all of its current contractual efforts (e.g., Cyberfile) effectively. IRS has initiated a number of actions to address management and technical weaknesses that continue to impede successful systems modernization. However, ongoing efforts do not correct the weaknesses and do not provide enough evidence to determine when they will be corrected and what steps, if any, are being taken in the interim to mitigate the risks associated with ongoing TSM spending. IRS has identified increasing electronic filings as critical to achieving its modernization vision. We noted that IRS did not have a comprehensive business strategy to reach or exceed its electronic filing goal, which was 80 million electronic filings by 2001. 
IRS’ estimates and projections for individual and business returns suggested that, by 2001, as few as 39 million returns may be submitted electronically, less than half of IRS’ goal and only about 17 percent of all returns expected to be filed. We reported that IRS’ business strategy would not maximize electronic filings because it primarily targeted taxpayers who use a third party to prepare and/or transmit simple returns, are willing to pay a fee to file their returns electronically, and are expecting refunds. Focusing on this limited taxpaying population overlooked most taxpayers, including those who prepare their own tax returns using personal computers, have more complicated returns, owe tax balances, and/or are unwilling to pay a fee to a third party to file a return electronically. We concluded that, without a strategy that also targets these taxpayers, IRS would not meet its electronic filing goals. In addition, if, in the future, taxpayers file more paper returns than IRS expects, added stress will be placed on IRS’ paper-based systems. Accordingly, we recommended that IRS refocus its electronic filing business strategy to target, through aggressive marketing and education, those sectors of the taxpaying population that can file electronically most cost-beneficially. IRS agreed with this recommendation and said that it had convened a working group to develop a detailed, comprehensive strategy to broaden public access to electronic filing, while also providing more incentives for practitioners and the public to file electronically. It said that the strategy would include approaches for taxpayers who are unwilling to pay for tax preparer and transmitter services, who owe IRS for balances due, and/or who file complex tax returns. IRS said further that the strategy would address that segment of the taxpaying population that would prefer to file from home, using personal computers. 
To date, IRS has performed an electronic filing marketing analysis at local levels; developed a marketing plan to promote electronic filing; consolidated 21 electronic filing initiatives into its Electronic Filing Strategies portfolio; and initiated a reengineering project with a goal to reduce paper tax return filings to 20 percent or less of the total volume by the year 2000. It plans to complete its electronic filing strategy in August 1996. These initiatives could result in future progress toward increasing electronic filings. However, our review found that these initiatives are not far enough along to determine whether they will culminate in a comprehensive strategy that identifies how IRS plans to target those sectors of the taxpaying population that can file electronically most cost-beneficially. It also is not clear how the reengineering project will impact the strategy or how these initiatives will impact TSM systems that are being developed. We reported that IRS did not have strategic information management practices in place. We found, for example, that, despite the billions of dollars at stake, information systems were not managed as investments. To overcome this, and provide the Congress with insight needed to assess IRS’ priorities and rationalization for TSM projects, we recommended that the IRS Commissioner take immediate action to implement a complete process for selecting, prioritizing, controlling, and evaluating the progress and performance of all major information systems investments, both new and ongoing, including explicit decision criteria, and using these criteria, to review all planned and ongoing systems investments by June 30, 1995. In agreeing with these recommendations, IRS said it would take a number of actions to provide the underpinning it needs for strategic information management. 
IRS said, for example, that it was developing and implementing a process to select, prioritize, control, and evaluate information technology investments to achieve reengineered program missions. Our assessment found that IRS has taken steps towards putting into place a process for managing its extensive investments in information systems. Following are examples of these steps. IRS created the executive-level Investment Review Board, chaired by the Associate Commissioner for Modernization, for selecting, controlling and evaluating all of IRS’ information technology investments. IRS developed initial and revised sets of decision criteria used last summer and again in November 1995 as part of its Resource Allocation and Investment Review to make additional changes in information technology resource allocations for remaining fiscal year 1996 funds and planned fiscal year 1997 spending. This review included only TSM projects under development. It did not address operational systems, infrastructure, or management and technical support activities. The Treasury Department created a Modernization Management Board to review and validate high-risk, high-cost TSM investments and to set policy and strategy for IRS modernization effort. IRS is considering the use of a “project readiness review” as an additional Investment Review Board control mechanism for gauging project readiness to proceed with spending. IRS developed the Business Case Handbook that includes decision criteria on costs, benefits, and risks. It is reassessing the business cases, which were developed on the TSM projects, using the handbook. Eleven cases are scheduled for completion in June 1996, and IRS plans to have the remaining cases completed by September 1996. Results are planned to be presented to the Investment Review Board to assist in making funding decisions for fiscal year 1997. 
IRS has developed the Investment Evaluation Review Handbook designed to assess projected costs and benefits against actual results. The handbook has been used on four TSM projects and five additional reviews are scheduled to be completed within the next year. The completed reviews contain explicit descriptions of problems encountered in developing these systems. The reviews make specific recommendations for management and technical process changes to improve future results. Specific recommendations pertain to strengthening project direction and decision-making. Many reflect concerns that we have raised in past reviews. The investment evaluation reviews were presented to the Investment Review Board and disseminated to other IRS managers. IRS is defining roles, responsibilities, and processes for incorporating Investment Evaluation Review recommendations at the project and process levels. These are positive steps and indicate a willingness to address many of the weaknesses raised in our past reports and testimonies. But, as noted in Treasury’s report on TSM, the investment process is not yet complete. According to Treasury, it is missing (1) specific operating procedures, (2) defined reporting relationships between different management boards and committees, and (3) updated business cases for major TSM technology investments. These concerns coincide with two central criticisms we have repeatedly made about TSM. Because of the sheer size, scope, and complexity of TSM, it is imperative that IRS institutionalize a repeatable process for selecting, controlling, and evaluating its technology investments, and that it make informed investment decisions based on reliable qualitative and quantitative assessments of costs, benefits, and risks. Although IRS is planning and in the initial stages of implementing parts of such a process, a complete, fully-integrated process does not yet exist. 
Specifically, IRS has not provided us evidence to justify its claims that its decisions were supported by acceptable data on project costs, benefits, and risks. For example: Our review found no evidence to suggest that IRS established minimal data requirements for the decisions made as part of the TSM Resource Allocation and Investment Review or the rescope process in December 1995. Because IRS lacks the basic capabilities for disciplined software development, for example, it cannot convincingly estimate systems development costs, schedules, or performance. Subsequent to its rescope analysis, IRS developed minimal data quality requirements for cost-benefit and risk studies, proposed return on investment calculations, and return on investment thresholds, or comparisons of expected performance improvements with results to date. However, to date, few, if any, projects have met these criteria. In deciding whether to accelerate, delay, or cancel specific TSM projects, IRS did not use validated data on actual versus projected costs, benefits, or risks as set forth by the Office of Management and Budget (OMB). Instead, IRS continues to make its decisions based on spending whatever budgeted funding ceiling amounts can be obtained through its annual budget and appropriations cycles. As a result, IRS cannot convincingly justify its TSM spending decisions. Further, IRS did not include all projects (i.e., proposed projects, projects under development, operational systems, infrastructure, and management and technical support activities) in a single systems investment portfolio. Instead, only TSM projects under development were ranked. As a result, there is no compelling rationale for determining how much to invest in these projects compared to other projects, such as operational systems and infrastructure.
There is no defined process with prescribed roles and responsibilities to ensure that the results of investment evaluation reviews are being used to (1) modify project direction and funding when appropriate and (2) assess and improve existing investment selection and control processes and procedures. As a result, there is no evidence that changes are occurring based on the valuable lessons learned, such as those from the recently completed post-implementation review of the Service Center Recognition/Image Processing System. In that review, IRS found that because system requirements were not adequately defined or documented, the system could not be properly tested, which adversely affected its implementation. Moreover, with only four investment evaluation reviews completed to date and five planned for the upcoming year, these reviews cover only a small fraction of IRS’ total annual investment in TSM. More must be done to confirm the actual results achieved from TSM expenditures. We noted in our July 1995 report that IRS’ reengineering efforts were not linked to its systems development efforts. As shown in our work with leading organizations, information system development projects that are not driven by a critical reexamination and redesign of business processes achieve only a fraction of their potential to improve performance, reduce costs, and enhance quality. Since our July report, IRS’ reengineering efforts have undergone a redirection. Three reengineering projects—processing returns, responding to taxpayers, and enforcement actions—were halted because IRS decided to focus instead on an enterprise-level view of reengineering. Its new effort, entitled Tax Settlement Reengineering, was begun in March 1996 and involves a comprehensive review of all the major processes and activities that enable taxpayers to settle their tax obligations, from educational activities through final settlement of accounts.
The reengineering project team, working with IRS’ Executive Committee, has identified 16 major processes involved in tax settlement and is about to begin reengineering four of them. High-level designs of the new processes are scheduled to be defined by September 30, 1996, with work on detailed designs to start early in fiscal year 1997, if approved by the Executive Committee. Reengineering efforts on as many as eight other tax settlement processes could be underway by the end of fiscal year 1997. Although this effort could have substantial impact, IRS still faces the same problem we reported on a year ago. Reengineering lags well behind the development of TSM projects, whereas it should be ahead of it—defining and directing the technology investments needed to support new, more efficient business processes. Until the reengineering effort is mature enough to drive TSM projects, there is no assurance that ongoing systems development efforts will support IRS’ future business needs and objectives. The reengineering team believes that, by September 1996, it will have a general idea of how the first four tax settlement reengineering projects may impact current system development efforts. If additional reengineering projects are started as planned in 1997, it could be another year or more before most of the information and systems requirements stemming from these projects are defined. Meanwhile, investment continues in many TSM projects that may not support the requirements resulting from these reengineering efforts. IRS acknowledges that integration of reengineering and TSM must occur, and has assigned responsibility for it to the Associate Commissioner for Modernization, but has not yet specified how or when the requisite integration will occur. We reported that unless IRS improves its software development capability, it is unlikely to build TSM timely or economically, and systems are unlikely to perform as intended.
To assess its software capability, in September 1993, IRS rated itself using the Software Engineering Institute’s CMM. IRS placed its software development capability at the lowest level, described as ad hoc and sometimes chaotic, indicating significant weaknesses in its software development capability. Our review confirmed that IRS’ software development capability was immature and weak in key process areas. For instance, a disciplined process to manage system requirements was not being applied to TSM systems; a software tool for planning and tracking development projects was not being used; software quality assurance functions were not well defined or consistently performed; systems and acceptance testing were neither well defined nor required; and software configuration management was incomplete. To address IRS’ software development weaknesses and upgrade IRS’ software development capabilities, we recommended that the IRS Commissioner (1) immediately require that all future contractors who develop software for the agency have a software development capability rating of at least CMM Level 2; (2) before December 31, 1995, define, implement, and enforce a consistent set of procedures for all TSM projects for requirements management (going beyond IRS’ current request for information services process), software quality assurance, software configuration management, and project planning and tracking; and (3) define and implement a set of software development metrics to measure software attributes related to business goals. IRS agreed with these recommendations and said that it was committed to developing consistent procedures addressing requirements management, software quality assurance, software configuration management, and project planning and tracking. It also said that it was developing a comprehensive measurement plan to link process outputs to external requirements, corporate goals, and recognized industry standards.
Specifically regarding the first recommendation, IRS has (1) developed standard wording for use in new and existing contracts that have a significant software development component, requiring that all software development be done by an organization that is at CMM Level 2, (2) developed a plan for achieving CMM Level 2 capability on all of its contracts, and (3) started to implement a plan to monitor contractors’ capabilities, which may include the use of CMM-based software capability evaluations. The Department of the Treasury report also noted that a schedule for conducting software capability evaluations was developed. However, we found that IRS does not yet have the disciplined processes in place to ensure that all contractors are performing at CMM Level 2. For example, even after our July 1995 recommendation, contractors developing the Cyberfile electronic filing system were not using CMM Level 2 processes. Further, contrary to the Treasury report, no schedule for conducting software capability evaluations had yet been developed. With respect to the second recommendation, IRS is updating its systems life cycle (SLC) methodology. The SLC is planned to have details for systems engineering and software development processes, including all CMM key process areas. IRS has updated its systems engineering process to include guidance for defining and analyzing systems requirements and for preparing work packages. Furthermore, IRS has drafted handbooks providing guidance to audit and verify developmental processes. In addition, IRS has developed a configuration management plan template, updated its requirements management request for information services document, and developed and implemented a requirements management course. The Department of the Treasury also reported that IRS is testing the SLC on two TSM efforts, Integrated Case Processing (ICP) and Corporate Accounts Processing System (CAPS).
IRS also has a CMM process improvement plan, and work is being done across various IRS organizations to define processes to meet CMM Level 2. Finally, IRS is assessing its capabilities to manage contractors using the CMM goals. However, the procedures for requirements management, software quality assurance, software configuration management, and project planning and tracking are still not complete. A software development life cycle implementation project, which is to include these procedures, is not scheduled for completion until September 30, 1996. In addition, software quality assurance and configuration management plans for two ICP projects were not being used, and the groups developing software for CAPS do not have a software configuration management plan or a schedule for its development. Furthermore, ICP and CAPS development is continuing without the guidelines and procedures for other process areas (e.g., requirements management, project planning, and project tracking and oversight) required by CMM Level 2. Regarding the third recommendation, IRS has a three-phase process to (1) identify data sources for metrics, (2) define the metrics to be used, and (3) implement the metrics. A partial set of metrics is currently being identified. These metrics—populated with real data and in a preliminary format—are scheduled for initial use on a set of identified projects beginning June 30, 1996. Data sources for these metrics have been identified, and weaknesses (such as difficulties in retrieving the data and inconsistencies in the data) are being documented to provide feedback to the various systems’ owners. However, this initial set of metrics is incomplete. It focuses on areas such as time reporting, project sizing, and defect tracing and analysis, but does not include measures for determining customer satisfaction and cost estimation. Such measures are needed to adequately track needed functionality and its associated costs throughout systems development.
Further, there is no schedule for completing the definition of metrics or for institutionalizing the processes needed to ensure their use. Finally, there is no mechanism in place to correct identified data and data collection weaknesses. In summary, although IRS has begun to act on our recommendations, these actions are not yet complete or institutionalized, and, as a result, systems are still being developed without the disciplined practices and metrics needed to give management assurance that they will perform as intended. We reported that IRS’ systems architectures, integration planning, and system testing and test planning were incomplete. To address IRS’ technical infrastructure weaknesses, we recommended that the IRS Commissioner, before December 31, 1995, (1) complete an integrated systems architecture, including security, telecommunications, network management, and data management; (2) institutionalize formal configuration management for all newly approved projects and upgrades and develop a plan to bring ongoing projects under formal configuration management; (3) develop security concept of operations, disaster recovery, and contingency plans for the modernization vision and ensure that these requirements are addressed when developing information system projects; (4) develop a testing and evaluation master plan for the modernization; (5) establish an integration testing and control facility; and (6) complete the modernization integration plan and ensure that projects are monitored for compliance with modernization architectures. IRS agreed with these recommendations and said that it was identifying the necessary actions to define and enforce systems development standards and architectures agencywide. IRS’ current efforts in this area follow. In April 1996, IRS completed a descriptive overview of its integrated three-tier, distributed systems architecture to provide management with a high-level view of TSM’s infrastructure and supporting systems.
IRS has tasked the integration support contractor to develop the data and security architectures. IRS has adopted an accepted industry standard for configuration management. It developed and distributed its Configuration Management Plan template, which identifies the elements needed when constructing a configuration management plan. In April 1996, enterprisewide configuration management policies and procedures were established. IRS also plans to obtain contractor support to develop, implement, and maintain a vigorous configuration management program. IRS has prepared a security concept of operations and a disaster recovery and contingency plan. IRS has developed a test and evaluation master plan for TSM. IRS plans to develop implementation and enforcement policies for the plan. IRS has established an interim integration testing and control facility, which is currently being used to test new software releases. It is also planning a permanent integration testing and control facility, scheduled to be completed by December 1996. IRS has completed drafts of its TSM Release Definition Document, which is planned to provide definitions for new versions of TSM software from 1997 to 1999, and its Modernization Integration Plan, which is planned to define IRS’ process for integrating current and future TSM initiatives. However, these efforts are not yet complete. The disaster recovery and contingency plan, for example, does not specify the actions that centers need to take to absorb the workload of a center that suffers a disaster. The test and evaluation master plan provides the guidance needed to ensure sufficient developmental and operational testing of TSM. However, it does not describe what security testing should be performed, or how these tests should be conducted. Further, it does not specify the responsibilities and processes for documenting, monitoring, and correcting testing and integration errors. IRS is still working on plans for its integration testing and control facility. In the interim, it has established a temporary facility which is being used for limited testing.
The permanent facility is not currently being planned to simulate the complete production environment, and will not, for example, include mainframe computers. Instead, IRS plans to continue to test mainframe computer software and systems which interface with the mainframes in its production environment. To ensure that IRS does not put operations and service to taxpayers at risk, IRS should prepare a thorough assessment of its solution, including an analysis of alternative testing approaches and their costs, benefits, and risks. IRS’ draft TSM Release Definition Document and draft Modernization Integration Plan (1) do not reflect TSM rescoping and the information systems reorganization under the Associate Commissioner, (2) do not provide clear and concise links to other key documents (e.g., its integrated systems architecture, business master plan, concept of operations, and budget), and (3) assume that IRS has critical processes in place that are not implemented (e.g., effective quality assurance and disciplined configuration management). In summary, although IRS has taken actions to prepare a systems architecture and improve its integration and system testing and test planning, these efforts are not yet complete or institutionalized, and, as a result, TSM systems continue to be developed without the detailed architectures and discipline needed to ensure success. We reported that IRS had not established an effective organizational structure to consistently manage and control systems modernization organizationwide. The accountability and responsibility for IRS’ systems development was spread among IRS’ Modernization Executive, Chief Information Officer, and research and development division. To help address this concern, in May 1995, the Modernization Executive was named Associate Commissioner. The Associate Commissioner was to manage and control systems development efforts previously conducted by the Modernization Executive and the Chief Information Officer. 
In September 1995, the Associate Commissioner for Modernization assumed responsibility for the formulation, allocation, and management of all information systems resources for both TSM and non-TSM expenditures. In February 1996, IRS issued a Memorandum of Understanding providing guidance for initiating and conducting technology research and for transitioning technology research initiatives into system development projects. It is important that IRS maintain an organizationwide focus to manage and control all new modernization systems and all upgrades and replacements of operational systems throughout IRS. To do so, we recommended that the IRS Commissioner give the Associate Commissioner management and control responsibility for all systems development activities, including those of IRS’ research and development division. Steps are being taken by the Associate Commissioner to establish effective management and control of systems development activities throughout IRS. For example, the SLC methodology is required for information systems development, and information technology entities throughout the agency have been directed to submit documentation on all information technology projects for review. However, there is no defined and effective mechanism for enforcing the standards or ensuring that organizational entities cannot conduct systems development activities outside the control of the Associate Commissioner. Further, no timeframes have been established for defining and implementing such control mechanisms. As a result, systems development conducted by the research and development division has now been redefined as technology research, keeping it outside the control of the Associate Commissioner. In summary, although improvements have been made in consolidating management control over systems development, the Associate Commissioner still does not have control over all of IRS’ systems development activities.
IRS plans to increase its reliance on the private sector by (1) preparing an acquisition plan and statement of work to conduct an expedited competitive selection for a prime development and integration contractor; (2) transferring responsibility for systems engineering, design, prototyping, and integration for core elements of TSM to its integration support contractor; and (3) making greater use of software development contractors, including those available under the Treasury Information Processing Support Services (TIPSS), to develop and deliver major elements of production TSM systems. By increasing its reliance on contractors, IRS expects to improve the accountability for and probability of TSM success. IRS plans to increase the use of private-sector integration and development expertise by expanding the use of contractors to support TSM. It outlined a three-track approach for transitioning over a period of 2 years to the use of a prime contractor that would have, according to IRS, overall authority and responsibility for the development, delivery, and deployment of modernized information systems. To facilitate this strategy, IRS reported it would consolidate the management of all TSM resources, including key TSM contractors, in its Government Program Management Office (GPMO). Under the direct control of the Chief Information Officer, GPMO will be delegated authority for the management and control of the IRS staff and contractors that plan, design, develop, test, and implement TSM components. IRS plans to have GPMO fully staffed and operational by October 1, 1996. IRS representatives told us the agency was currently developing a detailed contract management plan and a statement of work for acquiring its prime contractor, and believed it could award a contract in about 2 years. IRS’ approach to expanding the use of contractors to build TSM is still in the early planning stages. 
Because of this, IRS was unable to provide us with formal plans, charters, schedules, or the definitions of shared responsibilities between GPMO and the existing program and project management staff. At this point, it is unclear what these IRS planned actions entail, or how they will work. For example, IRS has not specified how and when it plans to transfer its development activities to contractors, and to what extent contractors could be held responsible for existing problems in these government-initiated systems. This is particularly important because if IRS continues as planned, the principal TSM systems will be in development and/or deployed before IRS plans to select a prime contractor in about 2 years. Moreover, it is not clear how the prime contractor would direct potential competitors that are already under contract with IRS. Without further explanation of and a schedule for transitioning specific responsibilities from IRS to contractors, we cannot fully understand or assess IRS’ plans. Further, plans to use additional contractors will succeed if, and only if, IRS has the in-house capabilities to manage these contractors effectively. In this regard, there is clear evidence that IRS’ capability to manage contractors has weaknesses. In August 1995, IRS acquired the services of the Department of Commerce’s National Technical Information Service (NTIS) to act as IRS’ prime contractor in developing Cyberfile. However, Cyberfile was not developed using disciplined management and technical practices. As a result, this project exhibited many of the same problems we have repeatedly identified in other TSM systems, and, after providing $17 million to NTIS, it was not ready for planned testing during the 1996 tax filing season. Similarly, IRS contracted in 1994 to build the Document Processing System. 
After expending over a quarter of a billion dollars on the project, IRS has now suspended the effort and is reexamining some of its basic requirements, including which and how many forms should be processed, and which and how much data should be read from the documents. We recently initiated an assignment to evaluate in detail IRS’ software acquisition capabilities using the Software Engineering Institute’s Software Acquisition CMM. This assignment is scheduled to be completed later this year. It is clear that unless IRS has mature, disciplined processes for acquiring software systems through contractors, it will be no more successful in buying software than it has been in building software. IRS has initiated a number of actions and is making some progress in addressing our recommendations to correct its pervasive management and technical weaknesses. However, none of these actions, either individually or in the aggregate, fully satisfy any of our July 1995 recommendations and it is not clear when these actions will result in disciplined systems development. As a result, IRS continues to spend hundreds of millions of dollars on TSM through fiscal year 1997, while fundamental weaknesses jeopardize the investment. Recognizing its internal weaknesses, IRS plans to use a prime contractor and increase use of software development contractors to develop TSM. However, in this area, its plans and schedules are not well defined, and, therefore, cannot be completely understood or assessed. Further, as the experience with Cyberfile and the Document Processing System projects makes clear, IRS does not have the mature processes needed to acquire software and manage contractors effectively. 
Because IRS still does not have (1) effective strategic information management practices needed to manage TSM as an investment, (2) mature and disciplined software development processes needed to assure that systems built will perform as intended, (3) a completed systems architecture that is detailed enough to guide and control systems development, and (4) a schedule for accomplishing any of the above, the Congress could consider limiting TSM spending to only cost-effective modernization efforts that:

(1) support ongoing operations and maintenance;
(2) correct IRS’ pervasive management and technical weaknesses;
(3) are small, represent low technical risk, and can be delivered in a relatively short time frame; and
(4) involve deploying already developed systems, but only if these systems have been fully tested, are not premature given the lack of a completed architecture, and produce a proven, verifiable business value.

As the Congress gains confidence in IRS’ ability to successfully develop these smaller, cheaper, quicker projects, it could consider approving larger, more complex, more expensive projects in future years. Because IRS does not manage all of its current contractual efforts effectively, and because its plans to use a “prime” contractor and transition much of its systems development to additional contractors are not well defined, the Congress could consider requiring that IRS institute disciplined systems acquisition processes and develop detailed plans and schedules before permitting IRS to increase its reliance on contractors. On June 6, 1996, we met with Treasury and IRS officials to discuss a draft of this report and incorporated their comments as appropriate in finalizing it. In addition, on June 6, 1996, we received written comments from Treasury. In his letter, the Deputy Secretary of the Treasury reiterates Treasury’s commitment to significantly increased oversight of TSM and to making a sharp turn in the way TSM is managed.
He also makes clear Treasury’s and IRS’ understanding that additional improvements are necessary to fully correct the management and technical weaknesses delineated in our report. The Deputy Secretary of the Treasury also says that he is reducing the fiscal year 1997 budget request for TSM from $850 million to $664 million and will need to ensure, at all times, solid stewardship for the dollars appropriated and clear accountability for the investments undertaken. Achieving sound management for the TSM program will require that IRS (1) institutionalize effective strategic information management practices, (2) institutionalize mature and disciplined software development processes, and (3) complete systems, data, and security architectures and use them to guide and control systems development, before making major investments in TSM systems development. Until these disciplined processes are in place and the requisite architectures completed, the Congress could consider limiting IRS TSM spending to only cost-effective modernization efforts that meet the criteria outlined in our Matters for Congressional Consideration. We are sending copies of this report to the Chairmen and the Ranking Minority Members of (1) the Senate and House Committees on the Budget, (2) the Subcommittee on Taxation and IRS Oversight, Senate Committee on Finance, (3) the Senate Committee on Governmental Affairs, (4) the Subcommittee on Oversight, House Committee on Ways and Means, and (5) the House Committee on Government Reform and Oversight. We are also sending copies to the Secretary of the Treasury, Commissioner of the Internal Revenue Service, and Director of the Office of Management and Budget. Copies will be available to others upon request. This work was performed under the direction of Dr. Rona B. Stillman, Chief Scientist for Computers and Telecommunications, who can be reached at (202) 512-6412. Other major contributors are listed in appendix II. 
Sherrie Russ, Senior Evaluator Christopher E. Hess, Evaluator | Pursuant to a legislative requirement, GAO reviewed the Internal Revenue Service's (IRS) actions to correct GAO-identified management and technical weaknesses that jeopardize its tax systems modernization (TSM) efforts.
GAO found that: (1) IRS does not have a comprehensive strategy to maximize electronic filing because the present strategy targets only a small portion of the taxpayers likely to file electronically; (2) IRS strategic information management practices remain ineffective because information systems are not managed as investments; (3) IRS reengineering efforts lag behind the development of TSM projects; (4) IRS is improving its software development activities, but these improvements are not complete or institutionalized; (5) the IRS technical infrastructure, including systems architecture, integration planning, and system testing and test planning, is incomplete; (6) IRS has not established an effective organizational structure to consistently manage and control TSM; and (7) IRS plans to increase its use of contractors to facilitate TSM, but it has not been successful in managing all of its contractors. |
The federal-aid highway program provides nearly $30 billion annually to the states, most of which are formula grant funds that FHWA distributes through annual apportionments according to statutory formulas; once apportioned, these funds are generally available to each state for eligible projects. The responsibility for choosing which projects to fund generally rests with state departments of transportation and local planning organizations. The states have considerable discretion in selecting specific highway projects and in determining how to allocate available federal funds among the various projects they have selected. For example, section 145 of title 23 of the United States Code describes the federal-aid highway program as a federally assisted state program and provides that the authorization of the appropriation of federal funds or their availability for expenditure, “shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed.” A major highway or bridge construction or repair project usually has four stages: (1) planning, (2) environmental review, (3) design and property acquisition, and (4) construction. While FHWA approves state transportation plans, environmental impact assessments, and the acquisition of property for highway projects, its role in approving the design and construction of projects varies. The state’s activities and FHWA’s corresponding approval actions are shown in figure 1. Given the size and significance of the federal-aid highway program’s funding and projects, a key challenge for this program is overseeing states’ expenditure of public funds to ensure that state projects are well managed and successfully financed. Our work—as well as work by the DOT Inspector General and by state audit and evaluation agencies—has documented cost growth on numerous major highway and bridge projects. Let me provide one example. 
In January 2001, Virginia’s Joint Legislative Audit and Review Commission found that final project costs on Virginia Department of Transportation projects were well above their cost estimates and estimated that the state’s 6-year, $9 billion transportation development plan understated the costs of projects by up to $3.5 billion. The commission attributed these problems to several factors, including, among other things, not adjusting estimates for inflation and expanding the scope of projects. Our work has identified weaknesses in FHWA’s oversight of projects, especially in controlling costs. In 1997, we reported that cost containment was not an explicit statutory or regulatory goal of FHWA’s oversight. While FHWA influenced the cost-effectiveness of projects when it reviewed and approved plans for their design and construction, we found it had done little to ensure that cost containment was an integral part of the states’ project management. According to FHWA officials, controlling costs was not a goal of their oversight, and FHWA had no mandate in law to encourage or require practices to contain the costs of major highway projects. More recently, an FHWA task force concluded that changes in the agency’s oversight role since 1991—when the states assumed greater responsibility for overseeing federal-aid projects—had resulted in conflicting interpretations of the agency’s role in overseeing projects, and that some of the field offices were taking a “hands off” approach to certain projects. In June 2001, FHWA issued a policy memorandum, in part to clarify that FHWA is ultimately accountable for all projects financed with federal funds. 
As recently as last month, a memorandum posted on FHWA’s Web site discussed the laws establishing FHWA and the federal-aid highway program, along with congressional and public expectations that FHWA “ensure the validity of project cost estimates and schedules.” The memorandum concluded, “These expectations may not be in full agreement with the role that has been established by these laws.” In addition, we have found that FHWA’s oversight process has not promoted reliable cost estimates. While there are many reasons for cost increases, we have found, on projects we have reviewed, that initial cost estimates were not reliable predictors of the total costs and financing needs of projects. Rather, these estimates were generally developed for the environmental review—whose purpose is to compare project alternatives, not to develop reliable cost estimates. In addition, FHWA had no standard requirements for preparing cost estimates, and each state used its own methods and included different types of costs in its estimates. We have also found that costs exceeded initial estimates on projects we have reviewed because (1) initial estimates were modified to reflect more detailed plans and specifications as projects were designed and (2) the projects’ costs were affected by, among other things, inflation and changes in scope to accommodate economic development over time. We also found that highway projects take a long time to complete, and that the amount of time spent on them is of concern to the Congress, the federal government, and the states. Completing a major, new, federally funded highway project that has significant environmental impacts typically takes from 9 to 19 years and can entail as many as 200 major steps requiring actions, approvals, or input from a number of federal, state, and other stakeholders.
Finally, we have noted that in many instances, states construct a major project as a series of smaller projects, and FHWA approves the estimated cost of each smaller project when it is ready for construction, rather than agreeing to the total cost of the major project at the outset. In some instances, by the time FHWA considers whether to approve the cost of a major project, a public investment decision may, in effect, already have been made because substantial funds have been spent on designing the project and acquiring property, and many of the increases in the project’s estimated costs have already occurred. Since 1998, FHWA has taken a number of steps to improve the management and oversight of major projects in order to better promote cost containment. For example, FHWA implemented TEA-21’s requirement that states develop an annual finance plan for any highway or bridge project estimated to cost $1 billion or more and established a major projects team that currently tracks and reports each month on 15 such projects. FHWA has also moved to incorporate greater risk-based management into its oversight in order to identify areas of weakness within state transportation programs, set priorities for improvement, and work with the states to meet those priorities. The administration’s May 2001 reauthorization measure contains additional proposed actions. It would introduce more structured FHWA oversight requirements, including mandatory annual reviews of state transportation agencies’ financial management and “project delivery” systems, as well as periodic reviews of states’ practices for estimating costs, awarding contracts, and reducing project costs. To improve the quality and reliability of cost estimates, it would introduce minimum federal standards for states to use in estimating project costs. The measure would also strengthen reporting requirements and take new actions to reduce fraud. 
Many elements of the administration’s proposal are responsive to problems and options we have described in past reports and testimony. Should the Congress determine that enhancing federal oversight of major highway and bridge projects is needed and appropriate, options we have identified in prior work remain available to build on the administration’s proposal during the reauthorization process. However, adopting any of these options would require balancing the states’ right to select projects and desire for flexibility and more autonomy with the federal government’s interest in ensuring that billions of federal dollars are spent efficiently and effectively. Furthermore, the additional costs of each of these options would need to be weighed against its potential benefits. Options include the following:

Have FHWA develop and maintain a management information system on the cost performance of selected major highway and bridge projects, including changes in estimated costs over time and the reasons for such changes. Such information could help define the scope of the problem with major projects and provide insights needed to fashion appropriate solutions.

Clarify uncertainties concerning FHWA’s role and authority. As I mentioned earlier, the federal-aid highway program is by law a federally assisted state program, and FHWA continues to question its authority to encourage or require practices to contain the costs of major highway and bridge projects. Should uncertainties about FHWA’s role and authority continue, another option would be to resolve the uncertainties through reauthorization language.

Have the states track the progress of projects against their initial baseline cost estimates. The Office of Management and Budget requires federal agencies, for acquisitions of major capital assets, to prepare baseline cost and schedule estimates and to track and report the acquisitions’ cost performance. These requirements apply to programs managed by and acquisitions made by federal agencies, but they do not apply to the federal-aid highway program, a federally assisted state program. Expanding the federal government’s practice to the federally assisted highway program could improve the management of major projects by providing managers with information for identifying and addressing problems early.

Establish performance goals and strategies for containing costs as projects move through their design and construction phases. Such performance goals could provide financial or other incentives to the states for meeting agreed-upon goals. Performance provisions such as these have been established in other federally assisted grant programs and have also been proposed for use in the federal-aid highway program. Requiring or encouraging the use of goals and strategies could also improve accountability and make cost containment an integral part of how states manage projects over time.

Consider methods for improving the time it takes to plan and construct major federal-aid highway projects—a process that we reported can take up to 19 years to complete. Major stakeholders suggested several approaches to improving the timeliness of these projects, including (1) improving project management, (2) delegating environmental review and permitting authority, and (3) improving agency staffing and skills. We have recommended that FHWA consider the benefits of the most promising approaches and act to foster the adoption of the most cost-effective and feasible approaches.

Reexamine the approval process for major highway and bridge projects. This option, which would require federal approval of a major project at the outset, including its cost estimate and finance plan, would be the most far-reaching and the most difficult option to implement.
Potential models for such a process include the full funding grant agreement used by FTA for the New Starts program, and, as I testified last year, a DOT task force’s December 2000 recommendation calling for the establishment of a separate funding category for initial design work and a new decision point for advancing highway projects. Over the last 25 years, more than 1.2 million people have died as a result of traffic crashes in the United States—more than 42,000 in 2002. Since 1982, about 40 percent of traffic deaths were from alcohol-related crashes. In addition, traffic crashes are the leading cause of death for people aged 4 through 33. As figure 2 shows, the total number of traffic fatalities has not significantly decreased in recent years. To improve safety on the nation’s highways, NHTSA administers a number of programs, including the core federally funded highway safety program, Section 402 State and Community Grants, and several other highway safety programs that were authorized in 1998 by TEA-21. The Section 402 program, established in 1966, makes grants available for each state, based on a population and road mileage formula, to carry out traffic safety programs designed to influence drivers’ behavior, commonly called behavioral safety programs. The TEA-21 programs include seven incentive programs, which are designed to reduce traffic deaths and injuries by promoting seatbelt use and reducing alcohol-impaired driving, and two transfer programs, which penalize states that have not complied with federal requirements for enacting repeat-offender and open container laws to limit alcohol-impaired driving. Under these transfer programs, noncompliant states are required to shift certain funds from federal-aid highway programs to projects that concern or improve highway safety.
In addition, subsequent to TEA-21, the Congress required that, starting later this year, states that do not meet federal requirements for establishing 0.08 blood alcohol content as the state legal limit for drunk driving will have a percentage of their federal-aid highway funds withheld. During fiscal years 1998 through 2002, over $2 billion was provided to the states for highway safety programs. NHTSA, which oversees the states’ highway safety programs, adopted a performance-based approach to oversight in 1998. Under this approach, the states and the federal government are to work together to make the nation’s highways safer. Each state sets its own safety performance goals and develops an annual safety plan that describes projects designed to achieve the goals. NHTSA’s 10 regional offices review the states’ annual plans and provide technical assistance, advice, and comments. NHTSA has two tools available to strengthen its monitoring and oversight of the state programs—improvement plans that states not making progress toward their highway safety goals are to develop, which identify programs and activities that a state and NHTSA regional office will undertake to help the state meet its goals; and management reviews, which generally involve sending a team to a state to review its highway safety operations, examine its projects, and determine that it is using funds in accordance with requirements. Among the key challenges in this area are (1) evaluating how well the federally funded state highway safety programs are meeting their goals and (2) determining how well the states are spending and controlling their federal highway safety funds. In April 2003, we issued a report on NHTSA’s oversight of state highway safety programs in which we identified weaknesses in NHTSA’s use of improvement plans and management reviews.
Evaluating how well state highway safety programs are meeting their goals is difficult because, under NHTSA’s performance-based oversight approach, NHTSA’s guidance does not establish a consistent means of measuring progress. Although the guidance states that NHTSA can require the development and implementation of an improvement plan when a state fails to make progress toward its highway safety performance goals, the guidance does not establish specific criteria for evaluating progress. Rather, the guidance simply states that an improvement plan should be developed when a state is making little or no progress toward its highway safety goals. As a result, NHTSA’s regional offices have made limited and inconsistent use of improvement plans, and some states do not have improvement plans, even though their alcohol-related fatality rates have increased or their seat-belt usage rates have declined. Without a consistent means of measuring progress, NHTSA and state officials lack common expectations about how to define progress, how long states should have to demonstrate progress, how to set and measure highway safety goals, and when improvement plans should be used to help states meet their highway safety goals. To determine how well the states are spending and controlling their federal highway safety funds, NHTSA’s regional offices can conduct management reviews of state highway safety programs. Management reviews completed in 2001 and 2002 identified weaknesses in states’ highway safety programs that needed correction; however, we found that the regional offices were inconsistent in conducting the reviews because NHTSA’s guidance does not specify when the reviews should be conducted. The identified weaknesses included problems with monitoring subgrantees, poor coordination of programs, financial control problems, and large unexpended fund balances. Such weaknesses, if not addressed, could lead to inefficient or unauthorized uses of federal funds. 
According to NHTSA officials, management reviews also foster productive relationships with the states that allow the agency’s regional offices to work with the states to correct vulnerabilities. These regions’ ongoing involvement with the states also creates opportunities for sharing and encouraging the implementation of best practices, which may then lead to more effective safety programs and projects. To encourage more consistent use of improvement plans and management reviews, we made recommendations to improve the guidance to NHTSA’s regional offices on when it is appropriate to use these oversight tools. In commenting on a draft of the report, NHTSA officials agreed with our recommendations and said they had begun taking action to develop criteria and guidance for using the tools. The administration’s recent proposal to reauthorize TEA-21 would make some changes to the safety programs that could also have some impact on program efficiencies. For example, the proposal would somewhat simplify the current grant structure for NHTSA’s highway safety programs. The Section 402 program would have four components: core program formula grants, safety belt performance grants, general performance grants, and impaired driving discretionary grants. The safety belt performance grants would provide funds to states that had passed primary safety belt laws or achieved 90 percent safety belt usage. In addition, the general performance grant would provide funds based on overall reductions in (1) motor vehicle fatalities, (2) alcohol-related fatalities, and (3) motorcycle, bicycle, and pedestrian fatalities. Finally, the Section 402 program would have an impaired driving discretionary grant component, which would target funds to up to 10 states that had the highest impaired driving fatality numbers or fatality rates. 
In addition to changing the Section 402 program, the proposal would expand grants for highway safety information systems and create new emergency medical service grants. The proposal leaves intact existing penalties related to open container, repeat offender, and 0.08 blood-alcohol content laws, and establishes a new transfer penalty for states that fail to pass a primary safety belt law and have safety belt use rates lower than 90 percent by 2005. The proposal would also give the states greater flexibility in using their highway safety funds. A state could move up to half its highway safety construction funds from the Highway Safety Improvement Program into the core Section 402 program. A state would also be able to use 100 percent of its safety belt performance grants for construction purposes if it had a primary safety belt law, or 50 percent if the grant was based on high safety belt use. States could also use up to 50 percent of their general performance grants for safety construction purposes. The New Starts transit program identifies and funds fixed guideway projects, including rail, bus rapid transit, trolley, and ferry projects. The New Starts program provides much of the federal government’s investment in urban mass transportation. TEA-21 and subsequent amendments authorized approximately $10 billion for New Starts projects for fiscal years 1998 through 2003. The administration’s proposal for the surface transportation reauthorization, known as the Safe, Accountable, Flexible, and Efficient Transportation Equity Act of 2003 (SAFETEA), requests that about $9.5 billion be made available for the New Starts program for fiscal years 2004 through 2009. Unlike the federal highway program and certain transit programs, under which funds are automatically distributed to states on the basis of formulas, the New Starts program requires local transit agencies to compete for New Starts project funds on the basis of specific financial and project justification criteria. 
To obtain New Starts funds, a project must progress through a regional review of alternatives, develop preliminary engineering plans, and meet FTA’s approval for final design. FTA assesses the technical merits of a project proposal and its finance plan and then notifies the Congress that it intends to commit New Starts funding to certain projects through full funding grant agreements. The agreement establishes the terms and conditions for federal participation in the project, including the maximum amount of federal funds—no more than 80 percent of the estimated net cost of the project. While the grant agreement commits the federal government to providing the federal contributions to the project over a number of years, these contributions are subject to the annual appropriations process. State or local sources provide the remaining funding. The grantee is responsible for all costs exceeding the federal share, unless the agreement is amended. To meet the nation’s transportation needs, many states and localities are planning or building large New Starts projects to replace aging infrastructure or build new capacity. They are often costly and require large commitments of public resources, which may take several years to obtain from federal, state, and local sources. The projects can also be technically challenging to construct and require their sponsors to resolve a wide range of social, environmental, land-use, and economic issues before and during construction. It is critical that federal and other transportation officials meet two particular challenges that stem from the costly and lengthy federal funding commitment associated with New Starts projects. First, they must have a sound basis for evaluating and selecting projects. 
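The cost-sharing rule described above is simple arithmetic, but it has real consequences for grantees: the federal commitment is fixed by the grant agreement, so any overrun falls entirely on state and local sources. A minimal Python sketch illustrates this; the 80-percent cap is the only figure taken from the text, and the dollar amounts are invented for illustration:

```python
# Sketch of the New Starts cost-sharing rule: the full funding grant
# agreement caps federal funds at 80 percent of the project's *estimated*
# net cost, and the grantee bears all costs above that fixed commitment.
# Dollar figures below are hypothetical.

FEDERAL_SHARE_CAP = 0.80  # statutory maximum federal share

def federal_commitment(estimated_net_cost: float) -> float:
    """Maximum federal funds committed in the grant agreement."""
    return FEDERAL_SHARE_CAP * estimated_net_cost

def grantee_burden(estimated_net_cost: float, actual_cost: float) -> float:
    """Local share: everything above the fixed federal commitment."""
    return actual_cost - federal_commitment(estimated_net_cost)

# A project estimated at $500 million is capped at a $400 million
# federal commitment; if costs grow to $600 million, the grantee's
# share rises from $100 million to $200 million.
committed = federal_commitment(500e6)
local = grantee_burden(500e6, 600e6)
```

The point of the sketch is that the federal share is computed once, from the estimate, while the grantee's burden is open-ended unless the agreement is amended.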
Because many transit projects compete for limited federal transit dollars—there are currently 52 projects in the New Starts “pipeline”—and FTA awards relatively few full funding grant agreements each year, it is crucial that local governments choose the most promising projects as candidates for New Starts funds and that FTA uses a process that effectively selects those projects that most clearly meet the program’s goals. Second, FTA, like FHWA, has the challenge of overseeing the planning, development, and construction of selected projects to ensure they remain on schedule and within budget, and deliver their expected performance. In the early 1990s, we designated the transit grants management oversight program as high risk because it was vulnerable to fraud, waste, abuse, and mismanagement. While we have removed it from the high-risk designation because of improvements FTA has made to this program, we have found that major transit projects continue to experience cost and schedule problems. For example, in August 1999, we reported that 6 of the 14 transit projects with full funding grant agreements had experienced cost increases, and 3 of those projects had experienced cost increases that were more than 25 percent over the estimates approved by FTA in grant agreements. The key reasons for the increases included (1) higher than anticipated contract costs, (2) schedule delays, and (3) project scope changes and system enhancements. A recent testimony by the Department of Transportation’s Inspector General indicates that major transit projects continue to experience significant problems, including cost increases, financing problems, schedule delays, and technical or construction difficulties. FTA has developed strategies to address the twin challenges of selecting the right projects and monitoring their implementation costs, schedule, and performance.
First, in response to direction in TEA-21, FTA developed a systematic process for evaluating and rating potential New Starts projects competing for federal funding. Under this process, FTA assigns individual ratings for a variety of financial and project justification criteria and then assigns an overall rating of highly recommended, recommended, not recommended, or not rated. These criteria reflect a broad range of benefits and effects of the proposed projects, including capital and operating finance plans, mobility improvements, environmental benefits, operating efficiencies, cost-effectiveness, land use, and other factors. According to FTA’s New Starts regulations, a project must have an overall rating of at least “recommended” to receive a grant agreement. FTA also considers a number of other “readiness” factors before proposing funding for a project. For example, FTA proposes funding only for projects that are expected to enter the final design phase and be ready for grant agreements within the next fiscal year. Figure 3 illustrates the New Starts evaluation and ratings process. While FTA has made substantial progress in establishing a systematic process for evaluating and rating potential projects, our work has raised some concerns about the process. For example, to assist FTA in prioritizing projects to ensure that the relatively few full funding grant agreements go to the most important projects, we recommended in March 2000 that FTA further prioritize the projects that it rates as highly recommended or recommended and ready for New Starts funds. FTA has not implemented this recommendation. We believe that this recommendation is still valid because the funding requested for the many projects that are expected to compete for grant agreements over the next several years is likely to exceed the available federal dollars. 
A further concern about the ratings process stems from FTA’s decision during the fiscal year 2004 cycle to propose a project for a full funding grant agreement that had been assigned an overall project rating of “not rated,” even though FTA’s regulations require that projects have at least a “recommended” rating to receive a grant agreement. Finally, we found that FTA needs to provide clearer information and additional guidance about certain changes it made to the evaluation and ratings process for the fiscal year 2004 cycle. In work that addressed the challenge of overseeing ongoing projects once they are selected to receive a full funding grant agreement, we reported in March and September 2000 that FTA had improved the quality of the transit grants management oversight program through strategies that included upgrading its guidance and training of staff and grantees, developing standardized oversight procedures, and employing contractor staff to strengthen its oversight of grantees. FTA also expanded its oversight efforts to include a formal and rigorous assessment of a grantee’s financial capacity to build and operate a new project and of the financial impact of that project on the existing transit system. These assessments, performed by independent accounting firms, are completed before FTA commits funds for construction and are updated as needed until projects are completed. For projects that already have grant agreements, FTA focuses on the grantee’s ability to finish the project on time and within the budget established by the grant agreement. The administration’s fiscal year 2004 budget proposal contains three New Starts initiatives—reducing the maximum federal statutory share to 50 percent, allowing non-fixed-guideway projects to be funded through New Starts, and replacing the “exempt” classification with a streamlined ratings process for projects requesting less than $75 million in New Starts funding. 
These proposed initiatives have advantages and disadvantages, with implications for the cost-effectiveness and performance of proposed projects. First, the reduced federal funding would require local communities to increase their funding share, creating more incentive for them to propose the most cost-effective projects; however, localities might have difficulties generating the increased funding share, and this initiative could result in funding inequities for transit projects when compared with highway projects. Second, allowing non-fixed-guideway projects to be funded under New Starts would give local communities more flexibility in choosing among transit modes and might promote the use of bus rapid transit, whose costs compare favorably with those of light rail systems; however, this initiative would change the original fixed guideway emphasis of New Starts, which some project sponsors we interviewed believe might disadvantage traditional New Starts projects. Finally, replacing the “exempt” classification with a streamlined rating process for all projects requesting less than $75 million might promote greater performance-oriented evaluation since all projects would receive a rating. However, this initiative might reduce the number of smaller communities that would participate in the New Starts program. The Congress established the Essential Air Service (EAS) program as part of the Airline Deregulation Act of 1978. The act guaranteed that communities served by air carriers before deregulation would continue to receive a certain level of scheduled air service. Special provisions guaranteed service to Alaskan communities. In general, the act guaranteed continued service by authorizing DOT to require carriers to continue providing service to these communities. If an air carrier could not continue that service without incurring a loss, DOT could then use EAS funds to award that carrier a subsidy.
Subsidies are to cover the difference between a carrier’s projected revenues and expenses and to provide a minimum amount of profit. Under the Airline Deregulation Act, the EAS program was intended to sunset, or end, after 10 years. In 1987, the Congress extended the program for another 10 years, and in 1998, it eliminated the sunset provision, thereby permanently authorizing EAS. To be eligible for subsidized service, a community must meet three general requirements. It must have received scheduled commercial passenger service as of October 1978, may be no closer than 70 highway miles to a medium- or large-hub airport, and must require a subsidy of less than $200 per person (unless the community is more than 210 highway miles from the nearest medium- or large-hub airport, in which case no average per-passenger dollar limit applies). Funding for the EAS program comes from a combination of permanent and annual appropriations. Part of its funding comes from the Federal Aviation Reauthorization Act of 1996 (P.L. 104-264), which authorized the collection of user fees for services provided by the Federal Aviation Administration (FAA) to aircraft that neither take off nor land in the United States, commonly known as overflight fees. The act also permanently appropriated the first $50 million of such fees for EAS and safety projects at rural airports. In fiscal year 2003, total EAS program appropriations were $113 million. As the airline industry has evolved since the industry was deregulated in 1978, the EAS program has faced increasing challenges to remain viable. Since fiscal year 1995, the program’s costs have tripled, rising from $37 million to $113 million, and they are likely to continue escalating. Several factors are likely to affect future subsidy requirements. First, carriers’ operating costs have increased over time, in part because of the costs associated with meeting federal safety regulations for small aircraft beginning in 1996.
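The subsidy arithmetic and the three general eligibility requirements described above can be sketched as follows. This is a simplified illustration: the function names and dollar figures are assumptions, a single miles_to_hub parameter stands in for both highway-mileage tests, and the special provisions for Alaskan communities are omitted.

```python
def eas_subsidy(projected_expenses, projected_revenues, minimum_profit):
    """Subsidies cover the difference between a carrier's projected expenses
    and revenues, plus a minimum amount of profit."""
    return max(projected_expenses - projected_revenues, 0) + minimum_profit

def eligible_for_eas(served_in_oct_1978, miles_to_hub, subsidy_per_passenger):
    """Apply the three general requirements for subsidized service."""
    # Must have received scheduled commercial passenger service as of October 1978.
    if not served_in_oct_1978:
        return False
    # May be no closer than 70 highway miles to a medium- or large-hub airport.
    if miles_to_hub < 70:
        return False
    # Subsidy must be under $200 per person, unless the community is more than
    # 210 highway miles from the nearest medium- or large-hub airport.
    if miles_to_hub <= 210 and subsidy_per_passenger >= 200:
        return False
    return True
```

For example, a carrier projecting $1.0 million in expenses against $0.4 million in revenues, with a $50,000 minimum profit, would receive a $650,000 subsidy under this sketch.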
Second, carriers’ revenues have been limited because many individuals traveling to or from EAS-subsidized communities choose not to fly from the local airport, but rather to use other larger nearby airports, which generally offer more service at lower airfares. On average, in 2000, each EAS flight operated with just over 3 passengers. Finally, the number of communities eligible for EAS subsidies has increased over time, rising from a total of 106 in 1995 to 114 in July 2002 (79 in the continental United States and 35 in Alaska, Hawaii, and Puerto Rico) and again to 133 in April 2003 (96 in the continental United States and 37 in Alaska, Hawaii, and Puerto Rico). The number of subsidy-eligible communities may continue to grow in the near term. Figure 4 shows the increase in the number of communities eligible for EAS-subsidized service between 1995 and April 2003.

Over the past year, the Congress, the administration, and we have each identified a number of potential strategies generally aimed at enhancing the EAS program’s long-term sustainability. These strategies broadly address challenges related to the carriers’ cost of providing service and the passenger traffic and revenue that carriers can hope to accrue. In August 2002, in response to a congressional mandate, we identified and evaluated four major categories of options to enhance the long-term viability of the EAS program. In no particular order, the options we identified were as follows:

- Better match capacity with community use by increasing the use of smaller (i.e., less costly) aircraft and restricting little-used flight frequencies.
- Target subsidized service to more remote communities (i.e., those where passengers are less likely to drive to another airport) by changing eligibility criteria.
- Consolidate service to multiple communities into regional airports.
- Change the form of the federal assistance from carrier subsidies to local grants that would allow local communities to match their transportation needs with individually tailored transportation options.

Each of these options could have positive and negative effects, such as lowering the program’s costs but possibly adversely affecting the economies of the communities that would lose some or all of their direct scheduled airline service. This year’s House-passed version of the FAA reauthorization bill, H.R. 2115, also includes various options to restructure air service to small communities now served by the EAS program. The bill proposes an alternative program (the “community and regional choice program”), which would allow communities to opt out of the EAS program and receive a grant that they could use to establish and pay for their own service, whether scheduled air service, air taxi service, surface transportation, or another alternative. The complementary Senate FAA reauthorization bill (also H.R. 2115) also includes specific provisions designed to restructure the EAS program. This bill would set aside some funds for air service marketing to try to attract passengers and create a grant program under which up to 10 individual communities or a consortium of communities could opt out of the existing EAS program and try alternative approaches to improving air service. In addition, the bill would preclude DOT from terminating, before the end of 2004, a community’s eligibility for an EAS subsidy because of decreased passenger ridership and revenue. The administration’s proposal would generally restrict appropriations to the $50 million from overflight fees and would require communities to help pay the costs of funding their service. The proposal would also allow communities to fund transportation options other than scheduled air service, such as on-demand “air taxis” or ground transportation. Mr. Chairman, this concludes my prepared statement.
I would be pleased to answer any questions you or other members of the Committee may have. For future contacts regarding this testimony, please contact JayEtta Hecker at (202) 512-2834. Individuals making key contributions to this testimony included Robert Ciszewski, Steven Cohen, Elizabeth Eisenstadt, Rita Grieco, Steven Martin, Katherine Siggerud, Glen Trochelman, and Alwynne Wilbur.

Federal-Aid Highways: Cost and Oversight of Major Highway and Bridge Projects—Issues and Options. GAO-03-764T. Washington, D.C.: May 8, 2003.
Transportation Infrastructure Cost and Oversight Issues on Major Highway and Bridge Projects. GAO-02-673. Washington, D.C.: May 1, 2002.
Surface Infrastructure: Costs, Financing, and Schedules for Large-Dollar Transportation Projects. GAO/RCED-98-64. Washington, D.C.: February 12, 1998.
DOT’s Budget: Management and Performance Issues Facing the Department in Fiscal Year 1999. GAO/T-RCED/AIMD-98-76. Washington, D.C.: February 12, 1998.
Transportation Infrastructure: Managing the Costs of Large-Dollar Highway Projects. GAO/RCED-97-27. Washington, D.C.: February 27, 1997.
Transportation Infrastructure: Progress on and Challenges to Central Artery/Tunnel Project’s Costs and Financing. GAO/RCED-97-170. Washington, D.C.: July 17, 1997.
Transportation Infrastructure: Central Artery/Tunnel Project Faces Financial Uncertainties. GAO/RCED-96-1313. Washington, D.C.: May 10, 1996.
Central Artery/Tunnel Project. GAO/RCED-95-213R. Washington, D.C.: June 2, 1995.
Highway Safety: Research Continues on a Variety of Factors That Contribute to Motor Vehicle Crashes. GAO-03-436. Washington, D.C.: March 31, 2003.
Highway Safety: Better Guidance Could Improve Oversight of State Highway Safety Programs. GAO-03-474. Washington, D.C.: April 21, 2003.
Highway Safety: Factors Contributing to Traffic Crashes and NHTSA’s Efforts to Address Them. GAO-03-730T. Washington, D.C.: May 22, 2003.
Federal Transit Administration: Bus Rapid Transit Offers Communities a Flexible Mass Transit Option. GAO-03-729T. Washington, D.C.: June 24, 2003.
Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003.
Mass Transit: FTA’s New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002.
Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001.
Mass Transit: Project Management Oversight Benefits and Future Funding Requirements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.
Mass Transit: Implementation of FTA’s New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000.
Mass Transit: Challenges in Evaluating, Overseeing, and Funding Major Transit Projects. GAO/T-RCED-00-104. Washington, D.C.: March 8, 2000.
Mass Transit: Status of New Starts Transit Projects With Full Funding Grant Agreements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.
Mass Transit: FTA’s Progress in Developing and Implementing a New Starts Evaluation Process. GAO/RCED-99-113. Washington, D.C.: April 26, 1999.
Commercial Aviation: Issues Regarding Federal Assistance for Enhancing Air Service to Small Communities. GAO-03-540T. Washington, D.C.: March 11, 2003.
Commercial Aviation: Factors Affecting Efforts to Improve Air Service at Small Community Airports. GAO-03-330. Washington, D.C.: January 17, 2003.
Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.
Options to Enhance the Long-term Viability of the Essential Air Service Program. GAO-02-997R. Washington, D.C.: August 30, 2002.
Commercial Aviation: Air Service Trends at Small Communities Since October 2000. GAO-02-432. Washington, D.C.: August 30, 2002.
Essential Air Service: Changes in Passenger Traffic, Subsidy Levels, and Air Carrier Costs. GAO/T-RCED-00-185. Washington, D.C.: May 25, 2000.
Essential Air Service: Changes in Subsidy Levels, Air Carrier Costs, and Passenger Traffic. GAO/RCED-00-34. Washington, D.C.: April 14, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

It is important to ensure that long-term spending on transportation programs meets the goals of increasing mobility and improving transportation safety. In this testimony, GAO discusses what recently completed work on four transportation programs suggests about challenges and strategies for improving the oversight and use of taxpayer funds. These four programs are (1) the federal-aid highway program, administered by the Federal Highway Administration (FHWA); (2) highway safety programs, administered by the National Highway Traffic Safety Administration (NHTSA); (3) the New Starts program, administered by the Federal Transit Administration (FTA); and (4) the Essential Air Service (EAS) program, administered out of the Office of the Secretary of Transportation. Differences in the structure of these programs have contributed to the challenges they illustrate. The federal-aid highway program uses formulas to apportion funds to the states, the highway safety programs use formulas and grants, the New Starts program uses competitive grants, and the EAS program provides subsidies. For each program, GAO describes how the program illustrates a particular challenge in managing or overseeing long-term spending, as well as the strategies GAO and others have identified for addressing that challenge.
The federal-aid highway program illustrates the challenge of ensuring that federal funds (nearly $30 billion annually) are spent efficiently when projects are managed by the states. GAO has raised concerns about cost growth on major highway and bridge projects and about FHWA's oversight of those projects. Recent proposals to strengthen FHWA's oversight are responsive to issues and options GAO has raised. Options identified in previous GAO work provide the Congress with opportunities to build on recent proposals by, among other things, clarifying uncertainties about FHWA's role and authority. NHTSA's highway safety programs illustrate the challenge of evaluating how well federally funded state programs are meeting their goals. Over 5 years, the Congress provided about $2 billion to the states for programs to reduce traffic fatalities, which numbered over 42,000 in 2002. GAO found that NHTSA was making limited use of oversight tools that could help states better implement their programs and recommended strategies for improving the tools' use that NHTSA has begun to implement. The administration recently proposed performance-based grants in this area. FTA's New Starts program illustrates the challenge of developing effective processes for evaluating grant proposals. Under the New Starts program, which provided about $10 billion in mass transit funding in the past 6 years, local transit agencies compete for project funds through grant proposals. FTA has developed a systematic process for evaluating these proposals. GAO believes that FTA has made substantial progress by implementing this process, but GAO's work has raised some concerns, including the extent to which the process is able to adequately prioritize the projects. The Essential Air Service (EAS) program illustrates the challenge of considering modifications to statutorily defined programs in response to changing conditions.
Under the EAS program, many small communities are guaranteed to continue receiving air service through subsidies to carriers. However, the program has faced increasing costs and decreasing average passenger levels. The Congress, the administration, and GAO have all proposed strategies to improve the program's efficiency by better targeting available resources and offering alternatives for sustainable services.
The federal government is projected to invest more than $89 billion in IT in fiscal year 2017. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example:

- The Department of Veterans Affairs’ Scheduling Replacement Project was terminated in September 2009 after investing an estimated $127 million over 9 years.
- The tri-agency National Polar-orbiting Operational Environmental Satellite System was disbanded in February 2010 at the direction of the White House’s Office of Science and Technology Policy after the program invested 16 years and almost $5 billion.
- The Department of Homeland Security’s Secure Border Initiative Network program was ended in January 2011, after the department invested more than $1 billion in the program.
- The Office of Personnel Management’s Retirement Systems Modernization program was canceled in February 2011, after investing approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims.
- The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program.
- The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after investing more than a billion dollars and failing to deploy within 5 years of initially obligating funds.
- The Farm Service Agency’s Modernize and Innovate the Delivery of Agricultural Systems program, which was to replace aging hardware and software applications that process benefits to farmers, was halted in July 2014 after investing about 10 years and at least $423 million, while only delivering about 20 percent of the functionality that was originally planned.
Our past work found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from chief information officers (CIO). For example, we reported that some CIOs’ authority was limited in that not all CIOs had the authority to review and approve the entire agency IT portfolio. Our past work has also identified nine critical factors underlying successful major acquisitions that support the objective of improving the management of large-scale IT acquisitions across the federal government: (1) program officials actively engaging with stakeholders; (2) program staff having the necessary knowledge and skills; (3) senior department and agency executives supporting the programs; (4) end users and stakeholders being involved in the development of requirements; (5) end users participating in the testing of system functionality prior to end user acceptance testing; (6) government and contractor staff being stable and consistent; (7) program staff prioritizing requirements; (8) program officials maintaining regular communication with the prime contractor; and (9) programs receiving sufficient funding. Recognizing the importance of issues related to government-wide management of IT, FITARA was enacted in December 2014. The law was aimed at improving agencies’ acquisitions of IT and could help enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to the acquisition of IT, such as Agency CIO authority enhancements. 
CIOs at covered agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB’s incremental development guidance is being adequately implemented for IT investments, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. Other FITARA requirements include the following:

- Enhanced transparency and improved risk management. OMB and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by level of risk. Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk.
- Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres.
- Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.
- Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the federal strategic sourcing initiative. OMB is also required to issue related regulations.
In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving successful IT acquisitions and operations outcomes. Further, our February 2015 high-risk report also stated that, beyond implementing FITARA, OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, between fiscal years 2010 and 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including many to improve the implementation of the recent initiatives and other government-wide, cross-cutting efforts. We noted that OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of December 2016, OMB and the agencies had fully implemented 366 (or about 46 percent) of the 803 recommendations. 
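The counts just cited reduce to a few lines of arithmetic. The December 2016 figures (366 of 803 recommendations implemented) and the 80 percent target come from the testimony; the variable names are ours.

```python
import math

# As of December 2016: 366 of 803 recommendations fully implemented,
# against a target of implementing at least 80 percent.
implemented, total, target = 366, 803, 0.80

fraction = implemented / total  # roughly 0.46, i.e., about 46 percent

# Additional recommendations that would need to be implemented to hit 80 percent.
remaining = math.ceil(total * target) - implemented
```

Under this arithmetic, reaching the 80 percent target from the roughly 46 percent implemented as of December 2016 would require implementing 277 more of those recommendations.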
This was a 23 percent increase compared to the percentage we reported as being fully implemented in 2015. Figure 1 summarizes the progress that OMB and the agencies have made in addressing our recommendations, as compared to the 80 percent target. In addition, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. In addition to addressing our prior recommendations, our 2017 high-risk update also notes the importance of OMB and federal agencies continuing to expeditiously implement the requirements of FITARA. Given the magnitude of the federal government’s annual IT budget, which is projected to be more than $89 billion in fiscal year 2017, it is important that agencies leverage all available opportunities to ensure that IT investments are made in the most effective manner possible. To do so, agencies can rely on key IT workforce planning activities to facilitate the success of major acquisitions. OMB has also established several initiatives to improve the acquisition of IT, including reviews of troubled IT projects, a key transparency website, and an emphasis on incremental development. However, the implementation of these efforts has been inconsistent and more work remains to demonstrate progress in achieving successful IT acquisition outcomes. An area where agencies can improve their ability to acquire IT is workforce planning. In November 2016, we reported that IT workforce planning activities, when effectively implemented, can facilitate the success of major acquisitions. As stated earlier, ensuring program staff have the necessary knowledge and skills is a factor commonly identified as critical to the success of major investments. If agencies are to ensure that this critical success factor has been met, then IT skill gaps need to be adequately assessed and addressed through a workforce planning process. 
In this regard, we reported that four workforce planning steps and eight key activities can assist agencies in assessing and addressing IT knowledge and skill gaps. Specifically, these four steps are: (1) setting the strategic direction for IT workforce planning, (2) analyzing the workforce to identify skill gaps, (3) developing and implementing strategies to address IT skill gaps, and (4) monitoring and reporting progress in addressing skill gaps. Each of the four steps is supported by key activities (as summarized in table 1). However, in our November 2016 report, we determined that five agencies that we selected for in-depth analysis had not fully implemented key workforce planning steps and activities. For example, four of these agencies had not demonstrated an established IT workforce planning process. In addition, none of these agencies had fully assessed their workforce competencies and staffing needs regularly or established strategies and plans to address gaps in these areas. Figure 2 illustrates the extent to which the five selected agencies had fully, partially, or not implemented key IT workforce planning activities. The weaknesses identified were due, in part, to these agencies lacking comprehensive policies that required such activities, or failing to apply the policies to IT workforce planning. We concluded that, until these weaknesses are addressed, the five agencies risk not adequately assessing and addressing gaps in knowledge and skills that are critical to the success of major acquisitions. Accordingly, we made recommendations to each of the five selected agencies to address the weaknesses in their IT workforce planning practices that we identified. Four agencies—the Departments of Commerce, Health and Human Services, Transportation, and Treasury—agreed with our recommendations and one, the Department of Defense, partially agreed. 
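Step 2 above (analyzing the workforce to identify skill gaps) amounts to comparing staffing needs against current competencies. The sketch below is illustrative only; the competency names and counts are hypothetical.

```python
def skill_gaps(needed, on_staff):
    """Return the competencies where staffing falls short, and by how much."""
    return {
        skill: required - on_staff.get(skill, 0)
        for skill, required in needed.items()
        if on_staff.get(skill, 0) < required
    }

# Hypothetical staffing needs and current competencies for an IT organization.
needed = {"cloud architecture": 5, "IT acquisition management": 3, "cybersecurity": 8}
on_staff = {"cloud architecture": 5, "IT acquisition management": 1, "cybersecurity": 4}
```

Here the analysis would flag shortfalls of two acquisition specialists and four cybersecurity staff, which steps 3 and 4 would then address and monitor.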
In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and transparency and improve performance. OMB reported that federal agencies achieved over $3 billion in cost savings or avoidances as a result of these sessions in 2010. Subsequently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies. In June 2013, we reported that, while OMB and selected agencies continued to hold additional TechStats, more OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects. Specifically, OMB reported conducting TechStats at 23 federal agencies covering 55 investments, 30 of which were considered medium or high risk at the time of the TechStat. However, these reviews accounted for less than 20 percent of medium- or high-risk investments government-wide. As of August 2012, there were 162 such at-risk investments across the government. Further, we reviewed four selected agencies and found they had held TechStats on 28 investments. While these reviews were generally conducted in accordance with OMB guidance, we found that areas for improvement existed. For example, these agencies did not consistently create memorandums with responsible parties and due dates for action items. We concluded that, until these agencies fully implemented OMB’s TechStat guidance, they may not be positioned to effectively manage and resolve problems on IT investments. In addition, we noted that, until OMB and agencies develop plans and schedules to review medium- and high-risk investments, the investments would likely remain at risk.
Among other things, we recommended that OMB require agencies to conduct TechStats for each IT investment rated with a moderately high- or high-risk rating, unless there is a clear reason for not doing so. OMB generally agreed with this recommendation. However, when we testified on this issue slightly more than 2 years later in November 2015, we found that OMB had only conducted one TechStat review between March 2013 and October 2015. In addition, we noted that OMB had not listed any savings from TechStats in any of its required quarterly reporting to Congress since June 2012. This issue continues to be a concern and, in January 2017, the Federal CIO Council issued a report titled the State of Federal Information Technology, which noted that while early TechStats saved money and turned around underperforming investments, it was unclear if OMB had performed any TechStats in recent years. To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the IT Dashboard that noted both the significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating the Dashboard and issues with the accuracy and reliability of its data.
In total, we have made 47 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies have agreed with our recommendations. Most recently, in June 2016, we determined that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their major investments on the IT Dashboard. Specifically, our assessments of risk for 95 investments at 15 selected agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 3 summarizes how our assessments compared to the selected investments’ CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors which contributed to differences between our assessments and the CIO ratings: Forty of the 95 CIO ratings were not updated during the month we reviewed, which led to more differences between our assessments and the CIOs’ ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies’ rating processes spanned longer than 1 month. Longer processes mean that CIO ratings are based on older data, and may not reflect the current level of investment risk. Seven agencies’ rating processes did not focus on active risks. According to OMB’s guidance, CIO ratings should reflect the CIO’s assessment of the risk and the investment’s ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. 
As a result, we concluded that the associated risk rating processes used by the 15 agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we recommended that the 15 agencies take actions to improve the quality and frequency of their CIO ratings. Twelve agencies generally agreed with or did not comment on the recommendations and three agencies disagreed, stating their CIO ratings were adequate. However, we noted that weaknesses in their processes still existed and that we continued to believe our recommendations were appropriate. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB’s incremental development guidance. In May 2014, we reported that 66 of 89 selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half of these investments planned to deliver functionality in 12-month cycles. We also reported that only one of the five agencies had complete incremental development policies. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that the selected agencies update and implement their associated policies. Four of the six agencies agreed with our recommendations or had no comments; the remaining two agencies partially agreed or disagreed with the recommendations. 
The agency that disagreed with our recommendation stated that it did not believe that its recommendation should be dependent on OMB first taking action. However, we noted that our recommendation does not require OMB to take action first and that we continued to believe our recommendation was warranted and could be implemented. Subsequently, in August 2016, we reported that agencies had not fully implemented incremental development practices for their software development projects. Specifically, we noted that, as of August 31, 2015, 22 federal agencies had reported on the IT Dashboard that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal. These included project complexity, the lack of an established project release schedule, or that the project was not a software development project. Table 2 lists the total number and percent of federal software development projects for which agencies reported plans to deliver functionality every 6 months for fiscal year 2016. In conducting an in-depth review of seven selected agencies’ software development projects, we determined that 45 percent of the projects delivered functionality every 6 months for fiscal year 2015 and 55 percent planned to do so in fiscal year 2016. Agency officials reported that management and organizational challenges and project complexity and uniqueness had impacted their ability to deliver incrementally. We concluded that it was critical that agencies continue to improve their use of incremental development to deliver functionality and reduce the risk that these projects will not meet cost, schedule, and performance goals. 
In addition, while OMB had issued guidance requiring covered agency CIOs to certify that each major IT investment’s plan for the current year adequately implements incremental development, only three agencies (the Departments of Commerce, Homeland Security, and Transportation) had defined processes and policies intended to ensure that the department CIO certifies that major IT investments are adequately implementing incremental development. Officials from three other agencies (the Departments of Education, Health and Human Services, and the Treasury) reported that they were in the process of updating their existing incremental development policy to address certification, while the Department of Defense’s policies that address incremental development did not include information on CIO certification. We concluded that until all of the agencies we reviewed define processes and policies for the certification of the adequate use of incremental development, they will not be able to fully ensure adequate implementation of, or benefit from, incremental development practices. Accordingly, we recommended that four agencies establish a policy and process for the certification of major IT investments’ adequate use of incremental development. The Departments of Education and Health and Human Services agreed with our recommendation, while the Department of Defense disagreed and stated that its existing policies address the use of incremental development. However, we noted that the department’s policies did not comply with OMB’s guidance and that we continued to believe our recommendation was appropriate. The Department of the Treasury did not comment on the recommendation. In conclusion, with the enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisitions, and to strengthen the authority of CIOs to provide needed direction and oversight. 
In addition to implementing FITARA, applying key IT workforce planning practices could improve the agencies’ ability to assess and address gaps in knowledge and skills that are critical to the success of major acquisitions. Further, continuing to implement key OMB initiatives can help to improve the acquisition of IT. For example, conducting additional TechStat reviews can help focus management attention on troubled projects and provide a mechanism to establish clear action items to improve project performance or terminate the investment. Additionally, improving the assessment of risks when agencies rate major investments on the IT Dashboard would likely provide greater transparency and oversight of the government’s billions of dollars in IT investments. Lastly, increasing the use of incremental development approaches could improve the likelihood that major IT investments meet cost, schedule, and performance goals. Chairmen Hurd and Meadows, Ranking Members Kelly and Connolly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at [email protected]. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Chris Businsky, Rebecca Eyler, and Jon Ticehurst (Analyst in Charge). High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps. GAO-17-8. Washington, D.C.: November 30, 2016. Information Technology Reform: Agencies Need to Increase Their Use of Incremental Development Practices. GAO-16-469. Washington, D.C.: August 16, 2016. IT Dashboard: Agencies Need to Fully Consider Risks When Rating Their Major Investments. GAO-16-494. 
Washington, D.C.: June 2, 2016. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. Washington, D.C.: May 1, 2014. IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. Washington, D.C.: December 12, 2013. Information Technology: Additional Executive Review Sessions Needed to Address Troubled Projects. GAO-13-524. Washington, D.C.: June 13, 2013. IT Dashboard: Opportunities Exist to Improve Transparency and Oversight of Investment Risk at Select Agencies. GAO-13-98. Washington, D.C.: October 16, 2012. IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The federal government is projected to invest more than $89 billion on IT in fiscal year 2017. Historically, these investments have frequently failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, IT reform legislation was enacted, aimed at improving agencies' acquisitions of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list. This statement focuses on the status of federal efforts in improving the acquisition of IT. 
Specifically, this statement summarizes GAO's prior work primarily published between June 2013 and February 2017 on (1) key IT workforce planning activities, (2) risk levels of major investments as reported on OMB's IT Dashboard, and (3) implementation of incremental development practices, among other issues. The Federal Information Technology Acquisition Reform Act (FITARA) was enacted in December 2014 to improve federal information technology (IT) acquisitions and can help federal agencies reduce duplication and achieve cost savings. Successful implementation of FITARA will require the Office of Management and Budget (OMB) and federal agencies to take action in a number of areas identified in the law and as previously recommended by GAO. IT workforce planning. GAO identified eight key IT workforce planning practices in November 2016 that are critical to ensuring that agencies have the knowledge and skills to successfully acquire IT, such as analyzing the workforce to identify gaps in competencies and staffing. However, GAO reported that the five selected federal agencies it reviewed had not fully implemented these practices. For example, none of these agencies had fully assessed their competency and staffing needs regularly or established strategies and plans to address gaps in these areas. These weaknesses were due, in part, to agencies lacking comprehensive policies that required these practices. Accordingly, GAO made specific recommendations to the five agencies to address the practices that were not fully implemented. Four agencies agreed and one partially agreed with GAO's recommendations. IT Dashboard. To facilitate transparency into the government's acquisition of IT, OMB's IT Dashboard provides detailed information on major investments at federal agencies, including ratings from Chief Information Officers (CIO) that should reflect the level of risk facing an investment. 
GAO reported in June 2016 that 13 of the 15 agencies selected for in-depth review had not fully considered risks when rating their investments on the IT Dashboard. In particular, of the 95 investments reviewed, GAO's assessments of risks matched the CIO ratings 22 times, showed more risk 60 times, and showed less risk 13 times. Several factors contributed to these differences, such as CIO ratings not being updated frequently and using outdated risk data. GAO recommended that agencies improve the quality and frequency of their ratings. Most agencies agreed with GAO's recommendations. Incremental development. An additional reform initiated by OMB has emphasized the need for federal agencies to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Specifically, since 2012, OMB has required investments to deliver functionality every 6 months. In August 2016, GAO determined that, for fiscal year 2016, 22 agencies had reported on the IT Dashboard that 64 percent of their software development projects would deliver useable functionality every 6 months. However, GAO determined that only three of seven agencies selected for in-depth review had policies regarding the CIO certifying IT investments' adequate implementation of incremental development, as required by OMB. GAO recommended, among other things, that four agencies improve their policies for CIO certification of incremental development. Most of these agencies agreed with the recommendations. Between fiscal years 2010 and 2015, GAO made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. The significance of these recommendations contributed to the addition of this area to GAO's high-risk list. As of December 2016, OMB and the agencies had fully implemented 366 (or about 46 percent) of the 803 recommendations. 
In fiscal year 2016, GAO made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings GAO has identified. |
Agencies use vehicles in many ways to support their efforts to achieve various mission needs. These needs can be diverse, as demonstrated by the vehicle uses of the five agencies we selected for review: ferrying clients, conveying repair equipment, hauling explosive materials, and transporting employees, among others (see table 1). Agencies may own or lease the vehicles in their fleets and are responsible for managing their vehicles’ utilization in a manner that allows them to fulfill their missions and meet various federal requirements. For example, agencies determine the number and type of vehicles they need to own or lease and when a vehicle is no longer needed to achieve the agency’s mission. Statutes, executive orders, and policy initiatives direct federal agencies to, among other things, collect and analyze data on costs and eliminate non-essential vehicles from their fleets. For example, every year agencies provide an update on their progress in achieving the inventory goals determined by their Vehicle Allocation Methodology (VAM), such as the type and number of vehicles in their fleets. These updates are reviewed by GSA’s Office of Government-wide Policy (OGP), which provides feedback on agencies’ submissions. Federal provisions on vehicle justifications and determining what makes a vehicle “utilized” are detailed in the Federal Property Management Regulations (FPMR). Specifically, the FPMR describe how agencies can define utilization criteria for the vehicles that they use. According to GSA’s OGP, the only requirement in the utilization portion of the regulations is for agencies to justify every full-time vehicle in their respective fleets, though the regulations do not specify how these justifications should be conducted. The FPMR recommend—but do not require—that the annual mileage minimum for passenger vehicles be 12,000 miles, and 10,000 miles for light trucks.
However, according to GSA officials, mileage is not the only appropriate indicator of utilization for some vehicles’ missions. For example, GSA officials stated that it would be inappropriate to set a mileage expectation for an emergency responder vehicle or a vehicle that supports national security requirements because those vehicles are only needed in specific circumstances and may not accrue many miles. Thus, the FPMR state that the aforementioned mileage guidelines “may be employed by an agency… other utilization factors, such as days used, agency mission, and the relative costs of alternatives to a full time vehicle assignment, may be considered as justification where miles traveled guidelines are not met.” Therefore, according to GSA officials, agencies are allowed to define their own utilization criteria, which may include adopting the miles-traveled guidelines from the FPMR, using mileage minimums above or below the FPMR, or employing other metrics. According to GSA officials, agencies may choose to define their selected utilization criteria in their internal policies, and vehicles meeting those criteria would be considered justified under the regulations. However, if a vehicle does not meet the utilization criteria specifically described in agency policy, the FPMR permit agencies to individually justify a vehicle using criteria the agency finds appropriate for that specific vehicle. The regulations do not specify the frequency with which the justifications (either as determined by agency policy or individually determined) must be conducted, updated, or reviewed. Agencies decide what vehicles are needed to help them meet their missions at any given point in time. While GSA provides guidance, the ultimate decision-making power lies with the agency leasing the vehicle. Federal agencies can use GSA Fleet to acquire leased vehicles. According to GSA, under this arrangement an agency informs GSA Fleet what kind of vehicle is necessary for its mission.
GSA Fleet fulfills the agency’s request by either purchasing a new vehicle (owned by GSA but leased to the agency), or providing a vehicle from GSA’s existing inventory (owned by GSA and previously leased to another agency). GSA Fleet’s primary mission is to provide the “best value” to its customers and the American people. GSA Fleet’s leasing rates are designed to recover all costs of its leasing program, but the exact cost of a lease depends on the type of vehicle and the number of miles traveled during the lease period, among other factors. For example, a conventionally fueled subcompact sedan has a 2015 fixed rate of $153 per month and mileage rate of $0.13 per mile traveled. GSA Fleet’s fixed rate is designed to cover fixed costs such as GSA Fleet staff and vehicle depreciation, whereas the mileage rate is designed to cover variable costs such as fuel and maintenance. Agencies are responsible for any costs associated with damage or excessive wear and tear over the course of the lease—typically 3-7 years for a passenger vehicle. We previously reported that, according to GSA officials and fleet managers from military and civilian fleets, GSA Fleet’s vehicle lease rates are typically lower than the commercial sector and provide a more economical choice for federal agencies. GSA Fleet collects data on leased vehicles to assist with billing as well as help agencies manage their leased-vehicle fleets. GSA Fleet’s Fleet Management System (FMS) contains most of this data. The portal used by agencies to access the data in GSA’s FMS is called Drive-thru. Drive-thru offers a suite of applications, including tools to analyze crash data and report mileage. As Drive-thru is the primary portal through which customers can access GSA’s leasing data, some customers refer to the underlying database as Drive-thru as well.
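The two-part rate structure described above can be illustrated with a short sketch. The function name is hypothetical; the rates are the 2015 subcompact-sedan figures cited in this report.

```python
def monthly_lease_bill(fixed_rate: float, mileage_rate: float,
                       miles_driven: int) -> float:
    """Compute a GSA Fleet-style monthly bill.

    The fixed rate covers fixed costs (GSA Fleet staff, vehicle
    depreciation); the mileage rate covers variable costs (fuel,
    maintenance).
    """
    return fixed_rate + mileage_rate * miles_driven

# 2015 rates for a conventionally fueled subcompact sedan:
# $153 per month fixed, plus $0.13 per mile traveled.
bill = monthly_lease_bill(153.00, 0.13, 1_000)  # roughly $283 for 1,000 miles
```

Because the mileage rate is billed per mile, the monthly odometer readings that agencies report (discussed later in this report) feed directly into this calculation.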
While Drive-thru is the name of the exterior-facing access portal rather than the database itself, we will refer to the database as Drive-thru for the purposes of this report to reflect the language commonly used by GSA’s leasing customers. Drive-thru stores hundreds of data elements on each vehicle, including manufacturer-provided information such as make, model, and fuel efficiency; agency-reported data such as monthly mileage; and data obtained through fleet cards (charge cards) such as quantity and type of fuel purchased. Agencies can import information from Drive-thru into their own internal fleet management systems and, according to multiple agency officials, generally rely on GSA Fleet to ensure Drive-thru’s accuracy, as identifying and correcting erroneous data can be time consuming and difficult. However, agencies can change the data they receive from Drive-thru after data enter an agency’s internal fleet management system but before they are externally reported. GSA’s OGP co-manages and co-funds a web-based reporting tool—the Federal Automotive Statistical Tool (FAST)—with the Department of Energy (DOE). FAST gathers data from federal agencies about their owned and leased vehicles to satisfy a variety of federal-reporting requirements, including the annual Federal Fleet Report. According to the Office of Management and Budget (OMB), it is the leasing agencies, not GSA or DOE, which are responsible for the accuracy of the data agencies report to FAST. As a result, while GSA’s OGP helps compile the information from FAST that populates the Federal Fleet Report, the accuracy of the Federal Fleet Report is dependent on the accuracy of the data that agencies report to FAST. The Federal Fleet Report provides an overview of federal motor vehicle data, such as number of vehicles and related costs. 
A comparison of the reports from fiscal years 2012 through 2014 shows that the overall quantity of leased vehicles varies slightly from year to year, but the costs have consistently decreased. For example, in fiscal year 2013, federal agencies leased 183,989 vehicles at a cost of approximately $1.06 billion. In fiscal year 2014, federal agencies leased slightly more vehicles—186,214—but the costs dropped to $1.03 billion, as shown in table 2. GSA officials explained that the cost reduction is attributable in part to agencies’ decisions to lease smaller, less expensive vehicles. Although GSA collects and reports information on leased vehicles, GSA does not have responsibility for tracking how agencies use vehicles or identifying underutilized vehicles. Nevertheless, some of the services that GSA Fleet provides are related to utilization. For example, to help streamline customers’ vehicle leasing experiences, in 2014 GSA employed approximately 330 liaisons called Fleet Service Representatives (FSR). FSRs are expected to answer local customers’ questions about vehicle acquisition, provide assistance when vehicles need services, and help customers understand the various leasing terms and products offered by GSA Fleet. According to GSA Fleet, FSRs should discuss utilization with leasing customers at least annually as part of other business discussions. We found the data we reviewed in Drive-thru to be generally reliable as GSA has taken steps to ensure that the data are reasonable, although a few data elements have indications that those data could be more accurate. While GSA is not responsible for the accuracy of data in FAST, it has taken appropriate steps to ensure the data are reasonable. GSA is responsible for ensuring that the information that it is providing to customers in Drive-thru is reliable (i.e., both reasonable and accurate).
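The cost trend above can be checked with simple arithmetic. The vehicle counts and totals are the report’s figures; the per-vehicle averages are derived here for illustration and are not stated in the source.

```python
# (leased vehicles, total annual cost in dollars), per the Federal Fleet Report
fy2013 = (183_989, 1.06e9)
fy2014 = (186_214, 1.03e9)

# Average annual cost per leased vehicle (derived, not from the source)
avg_2013 = fy2013[1] / fy2013[0]
avg_2014 = fy2014[1] / fy2014[0]

# More vehicles but a lower total cost means the per-vehicle average fell,
# consistent with agencies leasing smaller, less expensive vehicles.
assert fy2014[0] > fy2013[0] and avg_2014 < avg_2013
```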
It is important that data in Drive-thru are reliable because reports that are generated via Drive-thru represent a service that GSA is directly providing to customers to help them manage their fleets. Agencies also use Drive-thru when fulfilling federal fleet reporting requirements. For example, agencies can download a report about their leased vehicles from Drive-thru. The report then can be directly uploaded into FAST to meet annual reporting requirements on the leased fleet’s size and costs. Incorrect data in Drive-thru can therefore hinder agencies’ abilities to manage their leased fleets or could compromise the integrity of federal reports. A basic test of reliability is whether the data are reasonable. Using the guidance provided in three key sources, we developed an analytical framework for measuring the “reasonableness” of data, as there is currently no universally accepted standard for such a measurement. Each of these key sources discusses three topics, which we use as our standard for reasonableness of data: (1) electronic safeguards, such as error messages for out-of-range or inconsistent entries; (2) a review of data samples to ensure that key fields are non-duplicative and sensible; and (3) clear guidance to ensure consistent user interpretation of data entry rules. Based on the data we reviewed, we found that GSA has taken appropriate steps to ensure the selected Drive-thru data are reasonable. Specifically, GSA uses electronic safeguards when data are entered into Drive-thru. For example, error messages appear if a user enters an odometer reading such as 12345, 99999, 00000, or 654321, or a reading that differs 9,999 or more miles from the previous month’s entry. Similarly, GSA uses a validation program to catch vehicle identification number (VIN) entry errors. VIN barcodes are scanned into GSA’s system unless they must be manually entered due to barcode damage.
For both scans and manual entries, software validates that the entered VIN meets the check digit calculation. In addition, GSA verifies some data during reconciliations and other post-entry checks. For example, customer mileage entries are routinely monitored by GSA’s Loss Prevention Team (LPT) for abnormal inputs. If entries for a specific vehicle are consistently nonsensical, the LPT reviews the activity for signs of fraud and, if likely fraudulent, forwards the case to the appropriate Inspector General’s office for investigation. For entries that are consistently nonsensical but are not likely fraudulent, the LPT notifies the designated FSR for follow-up with the customer. The FSR is then tasked with emphasizing to the customer the importance of entering valid odometer readings in the future. Lastly, GSA reported that it provides guidance on how to enter vehicle-related information into Drive-thru to the people who are responsible for entering different types of data. Generally, information about the vehicle itself is the responsibility of GSA or its agents (such as contractors—known as “marshallers”—who enter manufacturer-provided data at the time GSA receives the vehicle). GSA provides a handbook to marshallers that explains how the marshallers should use the software that collects information and transmits it to GSA’s system. Similarly, GSA provides a Drive-thru guide to customers that explains how customers should enter certain types of information into Drive-thru; however, GSA does not provide instructions regarding how customers should inform GSA if their contact information will change. The lack of such guidance may have been a contributing factor in the inaccuracies we found in the customer contact data, as discussed in the next section on indications of accuracy in Drive-thru data; however, according to GSA officials, planned changes to GSA’s customer ID protocols will remove the need for such guidance in the future.
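The check digit calculation mentioned above can be sketched as follows. This is not GSA’s actual validation program; it is a minimal illustration of the standard North American VIN check-digit scheme (position 9 of the 17-character VIN), assuming that is the scheme GSA’s software applies.

```python
# Transliteration values and positional weights from the standard
# VIN check-digit scheme (letters I, O, and Q never appear in a VIN).
TRANSLIT = {**{str(d): d for d in range(10)},
            **dict(zip("ABCDEFGH", range(1, 9))),
            **dict(zip("JKLMN", range(1, 6))), "P": 7, "R": 9,
            **dict(zip("STUVWXYZ", range(2, 10)))}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def is_valid_vin(vin: str) -> bool:
    """Return True if the 17-character VIN passes the check-digit test."""
    vin = vin.upper()
    if len(vin) != 17 or any(c not in TRANSLIT for c in vin):
        return False
    remainder = sum(TRANSLIT[c] * w for c, w in zip(vin, WEIGHTS)) % 11
    expected = "X" if remainder == 10 else str(remainder)
    return vin[8] == expected  # position 9 holds the check digit
```

A check of this kind catches most single-character transcription errors from manual entry, which is why it is useful when a damaged barcode forces a VIN to be keyed in by hand.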
A second test of data reliability is accuracy; however, we tested for indications of accuracy in the data, as verifying the data accuracy itself would have required extensive examination of individual vehicles, which was beyond the scope of this review. We performed tests on a selection of nearly two dozen Drive-thru data elements from May 2015 for selected vehicles and determined that there are numerous indications of accuracy associated with the data we reviewed. For example: Almost 100 percent of 9 vehicle inventory fields, including make, manufacturer name, fuel type, VIN, and model year, have no missing data. One vehicle was missing the manufacturer name. Three entries indicated the presence of a luxury manufacturer (all for Audi), an error rate of less than one hundredth of one percent. 0.07 percent of records for sedan fuel tank sizes exceeded 20 gallons. Although sedan fuel tank sizes vary and can change from year to year, few midsize sedans have 20 gallon tanks. Therefore, fuel tanks larger than 20 gallons might indicate a data error. Despite the overall indications that the selected Drive-thru data are accurate, there are three areas where we found indications that the data may be less accurate than the other information we studied: fuel type coding, odometer entries, and customer contact data. According to federal internal controls standards, data collection applications—including electronic safeguards such as logic and edit checks—should ensure that all inputs are correct in order to facilitate accountability and effective stewardship of government resources. First, we found that while most fuel-type-coding data appear to be accurate, gas stations coded pumps incorrectly in at least some cases from January through April 2015, and possibly in as many as 46 percent of cases. For example, drivers of vehicles with E-85 fuel types were reported to have purchased compressed natural gas or biodiesel.
We were not able to determine the precise number of instances where fuel had been miscoded because some vehicles use more than one type of fuel. For example, “flex fuel” vehicles can operate on either regular gasoline or an alternative fuel known as E-85, a blend of gasoline and ethanol. Given the data available, we could not determine which fuel the user actually selected and were thus unable to determine which purchases were coded incorrectly by the gas station. The high end of the error range (46 percent) would mean that every uncertainty was resolved as a fuel-pump-coding error by the gas station, an outcome that GSA officials said was extremely improbable. These officials noted that they believed the actual error rate was substantially lower. GSA officials nonetheless agreed that pump miscodings compromise data accuracy and noted that GSA has worked with fueling station owners and relevant associations to reduce them. However, the officials stated that their ability to effect change is highly limited, as the miscodings occur at the point of sale and the fueling stations have no incentive to correct them. In addition to fuel type miscodings, we found that 3 percent of monthly odometer entries in May 2015 were lower than the previous month’s odometer reading. An odometer reading that decreases from one month to the next indicates that there was an error at some point in time—either the previous month’s entry was too high, or the current month’s entry is too low. Monthly odometer readings are supplied by agencies as part of the billing process, and odometer errors result in temporary billing errors as agencies pay additional fees based on mileage. GSA officials stated that they cannot be certain of a vehicle’s odometer reading until the vehicle is returned to them at the end of the leasing period and that they typically depend on the leasing agency to correctly report the odometer readings.
According to GSA officials, as part of the monthly odometer-data collection process GSA's system warns users that they may have entered incorrect data if the reported odometer reading is 9,999 miles greater than or less than the previous month's odometer reading. Users would then be able to correct the data before submitting it to GSA. GSA officials stated that they chose the 9,999 mile warning point because they did not want the system to generate cautionary messages to customers when there was a valid reason for the mileage difference. The officials explained that there are legitimate reasons why the previous month's odometer reading might be higher than the current month's reading. For example, if the agency relied on GSA to estimate mileage in the previous month and the estimate was too high, the agency's correction in the current month could result in a lower odometer reading. GSA officials said that they did not want the system to incorrectly flag these instances, and that they have no plans to evaluate the current safeguard. However, using such a large mileage difference to trigger a warning means that GSA may be unlikely to catch the majority of errors. We found 52 cases where the mileage difference was 9,999 miles or greater, but more than 4,800 cases where the previous month's odometer reading exceeded the current month's reading. We also found that the average monthly odometer difference for our selected vehicle data is 564 miles per month, with 95 percent of vehicles driving less than 2,482 miles per month, as shown in table 3. Although the resulting billing errors can be resolved the following month and the overall error rate is low, resolutions take time and resources for both GSA and the customer. Evaluating the current warning and adjusting it accordingly could help improve the accuracy of the data and therefore help reduce these costs, and GSA officials stated that changing the existing safeguard would not be costly.
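The trade-off in the safeguard described above can be illustrated with a short sketch. This is an illustrative example only, not GSA's actual system; the tighter threshold, field layout, and mileage figures are assumptions chosen for demonstration.

```python
# Illustrative sketch of the odometer edit check discussed above.
# GSA's current rule warns only when the month-over-month difference
# reaches 9,999 miles; a decreasing reading is itself a sign of error.
GSA_THRESHOLD = 9_999

def warn_current(prev, curr, threshold=GSA_THRESHOLD):
    """Current rule: warn only on very large swings in either direction."""
    return abs(curr - prev) >= threshold

def warn_tighter(prev, curr, threshold=3_000):
    """A tighter rule (hypothetical): flag any decrease, plus
    implausibly large increases."""
    return curr < prev or (curr - prev) >= threshold

readings = [(52_100, 52_664),   # typical month: about 564 miles driven
            (52_100, 51_900),   # decrease of 200 miles: an error somewhere
            (52_100, 63_500)]   # jump of 11,400 miles

print([warn_current(p, c) for p, c in readings])  # [False, False, True]
print([warn_tighter(p, c) for p, c in readings])  # [False, True, True]
```

The sketch shows why the report found far more decreasing readings (more than 4,800) than warnings at the 9,999-mile threshold (52): a small decrease never trips the current check, while even a simple "flag any decrease" rule would catch it.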
Further, GSA's edit check for odometer readings is not consistent with federal internal control standards that call for agencies to pursue data accuracy when possible and cost-effective. Lastly, we found that customer contact data, such as the name and e-mail of the individual whom GSA should contact for vehicle-related services, are not always correct. As mentioned previously, GSA's customer-leasing guide does not provide guidance regarding how customers should proceed if the vehicle's point of contact will change. In addition, according to GSA officials, the customer ID number—which is how customers sign in to Drive-thru—is associated with the customer's fleet, not the customer points of contact themselves. As a result, customer contact data are updated manually by FSRs after FSRs detect a problem, such as a returned e-mail after the previous point of contact leaves the agency. Several FSRs stated that the manual updates are time-consuming. Moreover, one FSR we interviewed stated that the current process relies on the initiative of FSRs to ensure accuracy. Without accurate customer contact data, it is more difficult for FSRs to communicate with agencies about vehicles, including whether certain vehicles are still needed. Two FSRs stated that turnover in customer agency fleet management is high. Such turnover exacerbates the difficulty associated with maintaining the accuracy of these data. According to GSA officials, planned changes to Drive-thru in 2016 will resolve this issue, as customer IDs will no longer be assigned to a fleet. Rather, each customer will have an individual user account, profile, and password. In addition, the customer ID will be the individual customer's e-mail address instead of a number, a step that GSA officials anticipate will resolve the difficulties associated with updating user contact information.
GSA is not responsible for the accuracy of data reported to FAST, a data collection system that GSA co-manages with DOE. Rather, OMB’s Circular A-11 provides that agencies are responsible for reviewing and correcting fleet data prior to submitting them through FAST. However, GSA’s OGP has a role in ensuring the reasonableness of FAST data as a partner in the FAST management team. In this role, GSA focuses on data relevant to fleet management, such as overall inventory, cost, and utilization metrics. We found that GSA’s OGP has taken appropriate steps to ensure the fleet management data reported to FAST are reasonable. Specifically, (1) GSA is aware of the electronic safeguards built into FAST for fleet management data; (2) GSA examines some of the data after it is submitted by agencies and flags entries for correction; and (3) GSA provides guidance to agencies on how to properly enter information into FAST. According to GSA, it shares responsibility with DOE for implementing and managing electronic safeguards for FAST. GSA and DOE collaborate to implement logic checks, which both parties use to determine the reasonableness of the data. We also found that GSA has a process for reviewing data after they are entered by an agency. If, for example, a significant increase in a specific type of fuel use is not matched by a similar increase in inventory, mileage, or cost, then GSA flags the data for verification with the agency. While it is not known how often GSA finds entries that it recommends for agency review, GSA reported that during both the 2013 and 2014 FAST reporting cycles, a few agencies experienced difficulties that required GSA to help resolve data issues (for example, re-opening FAST after the close of the data call). 
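The kind of post-submission reasonableness review described above can be sketched as follows. This is a hedged illustration, not GSA's actual logic check; the thresholds, metric names, and figures are assumptions for demonstration.

```python
# Hypothetical sketch of a FAST reasonableness check: a large increase
# in reported fuel use that is not matched by a similar rise in
# inventory, mileage, or cost gets flagged for verification with the
# agency, as described above.
def flag_for_review(prev, curr, jump=0.5, match=0.1):
    """Flag if fuel use grew by more than `jump` (e.g., 0.5 = 50%)
    while inventory, mileage, and cost all grew by less than `match`."""
    def growth(key):
        return (curr[key] - prev[key]) / prev[key]
    return (growth("fuel_gal") > jump and
            all(growth(k) < match for k in ("inventory", "miles", "cost")))

# Hypothetical agency submissions for two reporting cycles.
fy2013 = {"fuel_gal": 100_000, "inventory": 500,
          "miles": 4_000_000, "cost": 2_000_000}
fy2014 = {"fuel_gal": 180_000, "inventory": 505,
          "miles": 4_100_000, "cost": 2_050_000}

print(flag_for_review(fy2013, fy2014))  # True: 80% more fuel, flat elsewhere
```

The design point is that no single metric is judged in isolation; an anomaly is only flagged when related metrics fail to corroborate it, which keeps legitimate fleet growth from generating spurious review requests.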
Lastly, we found that GSA provides guidance to agencies on how to properly enter information into FAST in a variety of formats, including (1) written instructions to users, (2) written instructions to administrators, (3) presentations at quarterly meetings, (4) one-on-one sessions with individual agencies upon request, (5) online demonstrations, and (6) official guidance in the form of Federal Management Regulation (FMR) bulletins. GSA has a limited role in identifying and reducing underutilized leased vehicles, as agencies are responsible for managing their vehicle fleets. GSA is not responsible for monitoring agencies' vehicle utilization policies. Rather, according to GSA officials, GSA focuses on providing guidance and advice to federal agencies on utilization by (1) developing written guidance and reviewing agencies' Vehicle Allocation Methodology (VAM) update submissions and (2) holding conversations with federal agencies' fleet managers about vehicle utilization at least annually. GSA's OGP provides written guidance in the form of bulletins to federal agencies to implement legislation, executive orders, and other directives, but agencies are not legally required to follow this guidance. For example, in May 2011, a Presidential Memo (implementing a 2009 Executive Order) required GSA to develop and distribute VAM guidance to federal agencies for determining their optimum fleet inventory. In response, GSA provided such guidance to agencies in August 2011. Specifically, the guidance directed agencies to survey the utilization of vehicles each year, but agencies were not required to follow the guidance and some agencies chose to continue using their existing processes even though those processes differed from the GSA guidance. For example, some agencies' fleet managers (including those from NASA, according to NASA officials, and those from the U.S.
Navy, according to GSA officials) believed that the processes they already had in place fulfilled the intention of the guidance. In addition to providing written guidance, GSA has voluntarily reviewed utilization information covered in agencies’ VAM update submissions and has sometimes made broad recommendations to agencies based on those reviews. For example, in the 2014 VAM review, GSA recommended that all executive federal agencies establish and document specific vehicle utilization criteria for motor vehicle justification, that the criteria be reviewed at least annually, and that action be taken when underutilized vehicles are identified. GSA officials told us that another aspect of the agency’s role in identifying and reducing underutilized leased vehicles is to provide advice to federal agencies’ fleet managers at least annually through conversations about utilization. According to GSA officials, this advisory role is intended to help the federal government save money by providing agencies with support needed to make wise business decisions. In addition, GSA officials explained that during conversations with fleet managers, FSRs might discuss the agency’s overall fleet size, vehicle replacement options, or may suggest that a larger vehicle is no longer needed when a smaller one will suffice. For example, one NASA fleet manager told us that his FSR coordinated the exchange of two larger vehicles in his fleet for two smaller vehicles for the purposes of downsizing and reducing fuel consumption. To improve our understanding of these utilization conversations and to examine their usefulness, we sent a non-generalizable survey to 68 fleet managers for our five selected federal fleets. While the responses are not representative of either the experiences among our five selected agencies or the federal fleet as a whole, they do provide insight into activities that are otherwise undocumented. 
Fifty-one fleet managers responded, with the majority of them (41) reporting either having decision-making authority or collaborating with their supervisor to make decisions about vehicle acquisition and disposal. Of the 41 respondents with a role in the vehicle acquisition and disposal decision-making process, 27 responded that their FSR has communicated with them about leased-vehicle utilization based on mileage. The majority of those decision-makers—25 of the 27—said that these communications were moderately to extremely useful in helping them to manage their leased-vehicle utilization based on mileage. However, 18 of the 51 overall respondents (including 14 of the 41 respondents with an acquisition and disposal decision-making role) said that they had never discussed utilization based on mileage with their FSR. GSA's management told us that it believes these conversations are occurring, but may not include the word "utilization," a situation that could explain, in part, why some of our survey respondents reported never having discussed utilization with their FSR. According to GSA officials, the expectation is inherent to the role of the FSR and is made clear to them through training. However, we found indications that not all FSRs are discussing utilization with agency fleet managers. GSA's management does not have a mechanism to help ensure that these conversations are occurring as expected. As a result, GSA may not be able to identify opportunities for FSRs to better assist agencies in identifying and managing their underutilized leased vehicles. Establishing such a mechanism would be consistent with federal internal control standards, which state that agencies should have reasonable assurance that employees are carrying out their duties and that feedback is provided in the event that expectations are not met.
While GSA generally focuses on providing guidance and advice, it has regulatory authority to repossess federal agencies' leased vehicles in some instances, including cases where agencies cannot produce justification for the vehicle. Specifically, the FPMR state that if GSA requests justification for a vehicle, agencies must provide it. If the agency does not provide justification for that leased vehicle, GSA may withdraw the vehicle from further agency use. GSA officials told us that GSA does not exercise this authority because reviewing these justifications would impose a significant cost and time burden on GSA. Some of the agencies we reviewed could not determine if vehicles met utilization criteria, could not provide justifications for vehicles, or kept vehicles that had been determined to be unneeded. In total, we identified shortcomings in agency processes that affected leased vehicles with an annual cost of approximately $8.7 million. While the FPMR provide general mileage guidelines that can be used as criteria for vehicle utilization—12,000 miles per year for passenger vehicles and 10,000 miles per year for light trucks—they also authorize agencies to develop their own criteria to determine vehicle utilization where miles-traveled guidelines are not appropriate. GSA officials stated most vehicles will not meet these guidelines and that agencies are expected to adopt criteria that reflect the vehicles' mission. The agencies in our review used a wide variety of utilization criteria, as shown in table 4. One of the five agencies—BIA—uses the FPMR mileage guidelines as its criteria. Three other agencies—Air Force, NPS, and VHA—use the FPMR mileage guidelines for some (but not all) vehicles. NASA does not use the FPMR guidelines as criteria; NASA uses miles-traveled criteria that are lower than the FPMR guidelines. Analyzing the appropriateness of each agency's utilization criteria was beyond the scope of this report.
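A utilization screen built on the FPMR mileage guidelines cited above could be sketched as follows. This is a minimal illustration under stated assumptions: the record layout and mileage figures are hypothetical, and a vehicle falling short of the guideline would then go through an agency's justification process rather than being eliminated automatically.

```python
# FPMR mileage guidelines described above: 12,000 miles/year for
# passenger vehicles, 10,000 miles/year for light trucks.
FPMR_GUIDELINES = {"passenger": 12_000, "light_truck": 10_000}

def needs_justification(vehicle):
    """A vehicle below its guideline is not automatically unneeded;
    it simply must be justified in another manner."""
    return vehicle["annual_miles"] < FPMR_GUIDELINES[vehicle["type"]]

# Hypothetical fleet records.
fleet = [
    {"tag": "G10-0001", "type": "passenger",   "annual_miles": 14_250},
    {"tag": "G10-0002", "type": "light_truck", "annual_miles": 6_800},
]

print([v["tag"] for v in fleet if needs_justification(v)])  # ['G10-0002']
```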
According to GSA officials, all utilization criteria—including mileage criteria below FPMR guidelines—are allowed under the FPMR. While three of our five selected agencies use mileage criteria below FPMR guidelines for at least some vehicles, they are not the only agencies doing so. For example, in fiscal year 2013, the Inspector General (IG) for the Department of Energy (DOE) found one DOE facility used 2,460 miles per year, an average of 205 miles per month, as its utilization criterion. Agencies provided a variety of explanations for the utilization criteria they selected: Air Force officials stated their vehicles serve very diverse mission needs. In order to ensure they have the right vehicle for each mission need, they developed a software algorithm with over 2,600 criteria that are not all utilization-based. Some criteria include the cost of alternatives and the criticality of a vehicle's contribution to the mission. According to BIA officials, the FPMR's miles-traveled guidelines are appropriate utilization criteria for their fleet because their vehicles typically travel long distances across remote areas to meet their mission. NPS officials stated they used the FPMR's miles-traveled guidelines as criteria for leased vehicles because the criteria provide the right metrics to meet department needs. VHA uses the FPMR's miles-traveled guidelines as well as other miles-traveled metrics and days per month as utilization criteria, which an official said reflects the agency's mission of delivering health care. Vehicles need to meet only one criterion to be considered utilized. NASA uses miles-traveled utilization criteria that are lower than the FPMR miles-traveled guidelines. NASA policy requires each NASA center to set utilization criteria at 25 percent of the average miles traveled for each vehicle type at that center (see app. II for a list of NASA utilization measurements by center).
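NASA's per-center approach described above can be expressed as a simple calculation. This is a sketch under stated assumptions: the mileage figures are hypothetical, and the actual per-center measurements are listed in app. II.

```python
# Sketch of NASA's policy as described above: each center sets its
# utilization criterion at 25 percent of the average miles traveled
# for each vehicle type at that center.
def center_criterion(miles_by_vehicle, fraction=0.25):
    """Utilization threshold for one vehicle type at one center."""
    return fraction * (sum(miles_by_vehicle) / len(miles_by_vehicle))

# Hypothetical annual mileage for one vehicle type at one center.
sedan_miles = [9_000, 12_000, 6_000, 13_000]      # average: 10,000
threshold = center_criterion(sedan_miles)

print(threshold)                                   # 2500.0 miles/year
print([m for m in sedan_miles if m < threshold])   # vehicles below criterion
```

A consequence of this design is that the threshold floats with the center's own driving patterns, so it will generally sit well below the FPMR's fixed 12,000/10,000-mile guidelines.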
NASA officials stated they believe this approach is an acceptable business practice, which the agency has used for more than 20 years. We found 71 percent of the vehicles we selected from the five agencies met these agency-defined criteria, as shown in table 4. For two agencies—NASA and VHA—we found that the agencies' processes for managing utilization data did not always facilitate the identification of underutilized leased vehicles, although both agencies have taken steps to rectify the identified issues. Specifically, we found: NASA did not apply its utilization criteria to 41 vehicles at its Armstrong Flight Research Center because, according to NASA officials, the center's transportation officer retired in 2013 and the replacement did not apply utilization criteria in fiscal year 2014. Without utilization criteria, the center could not determine which vehicles from this center were utilized in fiscal year 2014. The agency paid approximately $137,000 for these vehicles in fiscal year 2014. According to NASA officials, the center's transportation officer conducted a utilization analysis for these vehicles in fiscal year 2015 and the center will continue to follow NASA policy in the future. VHA did not safeguard vehicle utilization data at one VHA medical center, where a new employee deleted vehicle utilization data from 2008-2014. This prevented the agency from determining whether 343 vehicles had met the utilization criteria in fiscal year 2014. The agency paid more than $1.1 million to GSA in fiscal year 2014 for these vehicles. A VHA official said the agency was previously unaware vehicle utilization data from that medical center had been deleted from the Fleet Management Information System (FMIS) and has counseled the employee responsible regarding the error to ensure that the data are retained in the future.
If vehicles do not meet utilization criteria defined in agency policy, the FPMR provides that agencies must justify vehicles in another manner. The FPMR do not specify how agencies should conduct these justifications or how the justifications should be documented. While the FPMR state that agencies may be required to provide written justification, the regulations do not require agencies to clearly document the justifications before a request to provide such documentation is made. Federal internal control standards state that all transactions and significant events need to be clearly documented and that the documentation should be readily available for examination. Four of the five agencies in our review could not readily provide justifications for vehicles that had not met utilization criteria defined in agency policy. Cumulatively, these agencies spent approximately $5.8 million in fiscal year 2014 on vehicles where individual justifications could not be located in a timely manner, as shown in table 5 below. Without readily available documentation, the agencies could not determine whether they had justified these vehicles, and whether any of these vehicles should be eliminated from agency fleets. Air Force officials could not readily provide the justifications for 413 vehicles that did not meet the utilization criteria in its software algorithm. The agency paid $1.5 million to GSA in fiscal year 2014 for these vehicles. According to officials, vehicles that do not meet the utilization criteria in the Air Force’s algorithm are subject to the agency’s justification process, the results of which are stored in the agency FMIS. However, we found that the Air Force’s FMIS does not include information on all agency vehicles. Agency officials said justifications for these 413 vehicles are not stored in the Air Force’s FMIS and would be difficult to locate because these vehicles are used by the Air National Guard, which has its own justification process. 
However, Air Force is administratively responsible for these vehicles, according to agency officials. BIA officials could not readily provide the justifications for 282 vehicles that did not meet utilization criteria. The agency paid $1.2 million to GSA in fiscal year 2014 for these vehicles. According to these officials, justifications are documented via e-mail, and it would be very challenging to search e-mail for these records as there was no universal format. Moreover, BIA officials said some of the justifications were reviewed by a fleet manager who left the agency, and they were unsure how to retrieve records from that individual’s e-mail account. Interior officials stated they will replace BIA’s e-mail process with a standardized form accessible through Interior’s FMIS in fiscal year 2016. NASA was able to provide the justifications for all of its vehicles where it applied utilization criteria and the criteria were not met. NASA policy requires NASA centers to use Vehicle Utilization Review Boards (VURB) to approve or deny justifications for vehicles that do not meet utilization criteria. All vehicles that are reviewed by VURBs have an individual justification form, and all VURBs submit a summary document of their reviews to headquarters officials. NPS officials could not readily provide justifications for 645 vehicles because those justifications were not stored within the agency’s FMIS. The agency paid $2.5 million to GSA in fiscal year 2014 for these vehicles. While NPS designed its justification forms to be stored within Interior’s FMIS, we found none of these forms had been uploaded to the system. In order for NPS officials to determine which of its vehicles had been justified, they would need to locate these 645 forms, which officials said were stored in field offices. Interior officials told us they were unsure why some of NPS’ forms were not stored in the agency’s FMIS but they plan to upload the forms to the system. 
VHA was unable to locate justifications for 181 vehicles for which it had data indicating that the vehicle had not met VHA's utilization criteria. The agency paid $0.6 million to GSA in fiscal year 2014 for these vehicles. According to VHA officials, justifications are stored with local fleet managers and are not readily accessible to headquarters officials. Agency officials said that the justification system was developed to assist local fleet managers and that previously, it was not necessary for headquarters to access these records. The finding that four of the selected agencies' processes did not allow them to consistently determine which of their vehicles are justified is consistent with the findings of other agencies that have examined their vehicle fleets. For example, in 2014 the Inspector General (IG) for the Department of Homeland Security (DHS) reported that DHS could not determine whether or not certain vehicles that did not meet the agency's utilization criteria were justified. The IG estimated DHS's cost to operate these vehicles in fiscal year 2012 was between $35.3 and $48.6 million. As a result of our review, two of the selected agencies—BIA and NPS—have plans to modify their systems accordingly to provide accessible justification documentation. Without readily available justification documentation, agencies are limited in their ability to exercise oversight over vehicle retention decisions, including how many vehicles—if any—should be eliminated. Further, the FPMR do not specifically require that agencies document all of their justifications in writing or store the justifications in a readily accessible location. Federal internal control standards on record keeping and management call for the accurate and timely recording of transactions, such as justification decisions, and call for the documentation to be readily available for examination.
We found that without such readily available documentation, four of the five selected agencies in our review could not determine whether they had justified some of their vehicles and whether any of those vehicles should be eliminated from agency fleets. According to GSA officials, the agency has not reviewed the FPMR to determine if the regulations should be amended to be more specific about vehicle justification documentation, and they have no plans to do so. As a result, GSA may be missing an opportunity to help ensure that agencies are appropriately justifying all vehicles in their fleet and determining if their leased-vehicle fleets contain vehicles that should be eliminated. In addition to the vehicles where agencies could not locate justifications in a timely manner, three agencies kept vehicles that did not pass their justification process. The FPMR do not require agencies to take any action for unjustified vehicles, which are vehicles that neither meet the agency’s utilization criteria nor pass the justification process. However, federal internal control standards call for agencies to be accountable for stewardship of government resources. All five selected agencies have established approaches to address unjustified vehicles, which can include placing them into a shared pool, transferring them to a new mission, rotating them with higher-mileage vehicles, or eliminating them from their fleet. All five selected agencies took actions to reduce vehicles that did not meet utilization criteria or pass the justification process; yet three agencies cumulatively retained over 500 such vehicles, paying GSA $1.7 million for these vehicles in fiscal year 2014. See table 6. Specifically, we found that: NPS retained 109 vehicles that did not meet agency-defined utilization criteria and did not pass the agency’s justification process. The agency paid GSA $0.4 million in fiscal year 2014 for these vehicles. 
VHA retained 393 vehicles that did not meet agency-defined utilization criteria and did not pass the agency's justification process. The agency paid $1.3 million to GSA in fiscal year 2014 for these vehicles. VHA policy does not require justification for all vehicles that do not meet utilization criteria. As a result, these 393 vehicles were never subject to a justification process even though they did not meet utilization criteria. VA officials said that returning vehicles to GSA would not lead to cost savings because GSA will continue to charge the agency for the vehicle until a new lessee is found. GSA officials said that only in cases where a large number of vehicles are prematurely returned at once does GSA continue to charge the leasing agency for the vehicles. VA officials stated that they do not believe that this policy is applied consistently. NASA retained one vehicle that did not meet agency-defined utilization criteria in fiscal year 2014 and did not pass the agency's justification process. NASA officials explained that the vehicle was incrementally removed from service in fiscal year 2015 to ensure that mission requirements would not be negatively impacted. NASA has since returned its unjustified vehicle to GSA. While these findings are not generalizable, they are consistent with several findings from agency inspectors general that have reported agencies keeping vehicles even though the vehicles did not meet the agency's utilization criteria or pass the agency's justification process. For example, in 2013 the DOE IG found one DOE component retained 234 vehicles—21 percent of the component's fleet—even though the vehicles did not meet utilization criteria and users had not submitted justification for their retention. Similarly, in 2015 the DHS IG found that the Federal Protective Service had not properly justified administrative vehicles and spare law enforcement vehicles in its fleet, valued at more than $1 million in fiscal year 2014.
Internal controls call for agencies to be accountable stewards of government resources. However, agency processes do not always require that every vehicle undergo a justification review or that vehicles are removed if they do not pass a justification review. Agency processes that do not facilitate the removal of underutilized vehicles hinder agencies' abilities to maintain efficient vehicle fleets. Without processes to ensure that underutilized vehicles are consistently removed, agencies may be foregoing opportunities to reduce the costs associated with their fleets. The cost savings achieved by eliminating unjustified vehicles may be less than the cost paid to GSA because agencies may need to spend resources on alternative means to accomplish the work performed by these vehicles. For example, while an agency would save the monthly cost of leasing an eliminated vehicle, another vehicle in the agency's fleet may need to travel more miles if it performs functions previously performed by the eliminated vehicle. This may increase leasing costs for the remaining vehicle. Nonetheless, by not taking corrective action, agencies could be spending millions of dollars on vehicles that may not be needed. Given the approximately $1 billion spent annually on leased federal vehicles and the government-wide emphasis on good fleet management, it is critical for agencies to have reliable data and sound management practices. While GSA has taken a number of positive steps to assist agencies in managing their fleets, there are more actions it can take. For example, GSA's current 9,999-mile odometer-reading warning allows for large odometer discrepancies before warning users of a potential error, leading to potentially inaccurate odometer readings that can result in inaccurate billing and additional staff time for subsequent correction.
Evaluating the current warning and adjusting it accordingly could help improve data accuracy and therefore help reduce these costs. Additionally, while customers report that utilization-related conversations with FSRs are helpful, GSA does not have a mechanism to know the extent to which these conversations are taking place as expected. As a result, GSA may be missing a potential opportunity to help agencies ensure that their leased fleet is the optimum size. Furthermore, while the FPMR provide some guidance to federal agencies on how to justify vehicle utilization, they do not require agencies to have clearly documented justifications available for examination or any mechanism for ensuring that these justifications take place. We found shortcomings in these areas for almost all of the agencies in our review. Additionally, findings from Inspectors General have identified similar concerns at other agencies, indicating that a lack of readily available justifications may extend beyond the agencies covered under this review. GSA has not examined these regulations to determine whether they should be amended. As a result, GSA may be missing an opportunity to help ensure that agencies are appropriately justifying all vehicles in their fleet and determining if their leased vehicle fleet contains vehicles that should be eliminated. In the absence of an FPMR requirement, federal internal control standards can help agencies use their authority to be responsible stewards of government resources. However, because some agencies' processes do not consistently facilitate the identification of underutilized vehicles, these agencies may not know which vehicles should be eliminated. Specifically, without readily accessible written justification, agencies are limited in their ability to exercise oversight over key vehicle retention decisions for vehicles that cost millions of dollars annually.
Additionally, some agencies have not eliminated or reassigned vehicles that did not meet utilization criteria or pass a justification review. By not taking corrective action, agencies could be spending millions of dollars on vehicles that may not be needed. To help improve the accuracy of Drive-thru data to allow agencies to better manage their leased-vehicle fleet data, we recommend that the Administrator of GSA evaluate the 9,999-mile/month electronic safeguard for Drive-thru odometer readings to determine if a lower threshold could improve the accuracy of customer data and adjust this safeguard accordingly. To provide better assurance that Fleet Service Representatives (FSR) are having conversations with leasing customers about utilization in accordance with GSA expectations, we recommend that the Administrator of GSA develop a mechanism to help ensure that these conversations occur. To help strengthen the leased-vehicle justification processes across federal agencies, we recommend that the Administrator of GSA examine the FPMR to determine if these regulations should be amended to require that vehicle justifications are clearly documented and readily available, and adjust them accordingly. To improve the justification process, we recommend that the Secretary of the Department of Defense should direct the Secretary of the Air Force to modify the current process to ensure that each leased vehicle in the agency’s fleet meets the agency’s utilization criteria or has readily available justification documentation. To improve their justification process, we recommend that the Secretary of the Department of Veterans Affairs should direct the Under Secretary for Health to modify the current process to ensure that each leased vehicle in the agency’s fleet meets the agency’s utilization criteria or has readily available justification documentation. 
To facilitate the elimination of unnecessary vehicles, we recommend that the Secretary of the Department of the Interior should direct the NPS Director to take corrective action to address each leased vehicle that has not met the agency’s utilization criteria or passed the justification process. This corrective action could include (1) reassigning vehicles within the agency to ensure they are utilized or (2) returning vehicles to GSA. To facilitate the elimination of unnecessary vehicles, we recommend that the Secretary of the Department of Veterans Affairs should direct the Under Secretary for Health to take corrective action to address each leased vehicle that has not met the agency’s utilization criteria or passed the justification process. This corrective action could include (1) reassigning vehicles within the agency to ensure they are utilized or (2) returning vehicles to GSA. We provided a draft of this report to GSA; to the Departments of Defense, Interior, and Veterans Affairs; and to NASA for review and comment. GSA and the Departments of Defense, Interior, and Veterans Affairs provided written comments in which they concurred with our recommendations. These comments are reproduced in appendixes III-VI. NASA provided no comments. In written comments, GSA stated that it agreed with the three recommendations directed to it and is developing a comprehensive plan to address them. In written comments, the Department of Defense (DOD) concurred with the recommendation directed to it and stated that it would publish a policy memorandum in the second quarter of fiscal year 2016 that will direct DOD fleet managers to ensure that each leased vehicle in the agency’s fleet meets agency utilization criteria or has readily available justification documentation. If implemented as planned, this action should meet the intent of the recommendation.
In written comments, Interior concurred with the recommendation for NPS to take corrective action to address each leased vehicle that has not met the agency’s utilization criteria or successfully passed the utilization justification process and specified the actions that NPS, as well as BIA, are implementing or planning to enhance their leased-vehicle programs. For example, Interior stated that NPS is implementing actions to ensure vehicle justifications reside in the Department’s Financial and Business Management System and plans to review the current guidelines to establish reliable and consistent utilization metrics. In addition, Interior stated that NPS plans to develop processes to ensure justifications are on file and rotate underutilized vehicles to locations to increase the efficiency and effectiveness of its fleet. If implemented as planned, these actions should meet the intent of the recommendation. Interior also stated that BIA is establishing an electronic document repository to ensure accessibility of fleet management documents, transitioning to standard fleet-utilization forms, and conducting a leased-vehicle miles-driven utilization analysis to determine an annual mileage minimum requirement. In written comments, VA concurred with the two recommendations directed to it and specified the actions it has taken or plans to take to address them. Related to the recommendation to modify their current process to ensure that each leased vehicle in the agency’s fleet meets the agency’s utilization criteria or has readily available justification documentation, VA stated in its letter that VHA agrees that GSA-leased vehicles should either be used frequently enough to achieve the agency’s utilization criteria or have readily available justification documentation.
VA stated that, subsequent to our review, VHA’s fleet program took action to ensure local fleet management programs correct deficient documentation on vehicles identified in our review that did not meet the agency’s utilization criteria. Specifically, VA stated that VHA’s fleet program requested Veterans Integrated Service Networks to solicit local fleets to justify any vehicles that had insufficient justifying documentation during our review. In addition, to help ensure that local fleet management programs are complying with current documentation requirements and to improve oversight of the programs, VA stated that the Office of Capital Asset Management Engineering and Support would issue written reminders to local fleet programs and monitor and audit utilization reports. VA included a target completion date of January 2017. If implemented as planned, these actions should meet the intent of the recommendation. Related to the recommendation to take corrective action to address each leased vehicle that has not met the agency’s utilization criteria or passed the justification process, VA concurred and stated that this corrective action could include reassigning vehicles within the agency to ensure they are utilized or returning the vehicles to GSA. VA stated that VHA would take corrective action and included a target completion date of January 2017. If implemented as planned, these actions should meet the intent of the recommendation. While VA agreed with our recommendations to address underutilized vehicles, it disagreed with our conclusion that 14 percent of VHA’s leased fleet is “unneeded, costing taxpayers an unnecessary $3 million.” Based on our analysis of VA data, our report found that VHA paid $3 million in fiscal year 2014 for leased vehicles that did not meet utilization criteria and did not have readily available justifications. These vehicles accounted for 14 percent of the selected vehicles in VHA’s leased fleet. 
We did not state that these vehicles were unneeded. We did state, however, that without justifications or corrective actions, agencies could be spending money on vehicles that may not be needed. As discussed above, VA described actions taken subsequent to our review to address some of the issues we identified, and also reported in its written comments that the most recent data show that less than 1 percent of VHA’s total current leased vehicle fleet may not be fully utilized. This number reflects two differences from our calculation. First, in general comments on the draft report, VA stated that there are now 381 vehicles for which it cannot determine if the vehicle met utilization criteria, if the vehicle had a justification, or if VA is aware that the vehicle did not meet utilization criteria or have a justification. Based on our analysis, we found 917 such vehicles among VHA’s selected leased vehicle fleet in fiscal year 2014, a difference of 536 vehicles. As described in the report, we analyzed fiscal year 2014 data for five selected agencies because it was the latest completed fiscal year at the time of our review. We agree that the actions taken subsequent to our review, as well as VHA’s planned actions, should address the issues we identified and should meet the intent of the recommendations. However, we have not reviewed the documentation or verified the data on which VA’s new percentage is based. Second, VA’s new figure is the percentage of all of VHA’s leased vehicle fleet, not the percentage of selected leased vehicles that were part of our review. For the five agencies in our review, all of our percentages were calculated as a percentage of the number of leased vehicles that were selected for review, not of the agency’s entire leased vehicle fleet. As discussed in more detail in the report, we did this to consistently exclude vehicles such as tactical or law-enforcement vehicles. Thus, we continue to believe that our conclusion is valid.
GSA, Interior, and VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the Administrators of GSA and NASA, and the Secretaries of the Departments of Defense, Interior, and Veterans Affairs. In addition, this report will be available for no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. We conducted a review of the utilization of GSA’s leased vehicles. This report assesses: (1) the extent to which GSA data on leased vehicles are reliable, (2) GSA’s role in identifying and reducing underutilized leased vehicles, and (3) the extent to which the assessment processes used by selected federal agencies facilitate the identification and removal of underutilized leased vehicles, and any cost savings that could be achieved by reducing any underutilized vehicles. To determine the extent to which GSA’s data for leased vehicles are reliable, we examined the reasonableness of data contained in GSA’s internal fleet management database (Drive-thru) and the Federal Automotive Statistical Tool (FAST), a web-based reporting tool co-sponsored by GSA and the Department of Energy. For the purposes of this review, reliability is defined by two key components: reasonableness and indications of accuracy. We also tested a selection of Drive-thru data (reflecting approximately 162,000 vehicles) for indications of accuracy. GSA is responsible for the reasonableness of data in Drive-thru and FAST. We used three key sources to develop a standard for reasonableness, as there is currently no single federal criterion for a measurement of reasonableness.
The three key sources included (1) prior GAO work that provided guidance on how to assess the reliability of data; (2) OMB’s Circular A-123, which defines management’s responsibility for internal controls in the federal government; and (3) GAO’s Green Book, which provides standards for internal control in the federal government. The key practices surrounding the standard of measurement that we developed for reasonableness are: electronic safeguards, such as error messages for out-of-range entries or inconsistent entries; the extent to which GSA reviews data samples to ensure that key data fields are non-duplicative and sensible; and the clarity of the guidance that GSA provided to ensure consistent user interpretation of data entry rules. As agencies are responsible for the accuracy of data in FAST, not GSA, we only examined Drive-thru for indications of accuracy. We focused on approximately two dozen data elements contained in the Fuel Use Report and the Inventory Report, as these related most directly to costs associated with utilization and federal fleet reporting. To this end, we requested data from GSA for all GSA-leased vehicles that were continuously leased by the same agency from January 1, 2015, through May 21, 2015. We requested continuously leased vehicles because we anticipated making month-to-month data comparisons. However, this historical comparison was not feasible as GSA does not store some historical data in its Fleet Management Information System database, which provides information to Drive-thru. Therefore, the inventory data pulled from GSA’s database were a “snapshot” of the federal fleet as of May 21, 2015, although the fuel data reflected the months of January-April 2015. Once the data were obtained, we conducted a variety of logic checks to locate any anomalies that might provide insight into the extent to which GSA ensures the accuracy of Drive-thru data.
For example, one of the logic checks we performed on these data included counting vehicles and determining whether at least one purchased fuel type over a 4-month time period failed to match the vehicle’s fuel type (accounting for vehicles that could potentially use more than one fuel type). This logic check was performed to determine how often, if at all, fuel was erroneously coded at the fuel pump. For objectives 2 and 3, we judgmentally selected five federal vehicle fleets from five federal agencies, including the U.S. Air Force (Air Force); U.S. Department of the Interior’s National Park Service and Bureau of Indian Affairs; National Aeronautics and Space Administration; and U.S. Department of Veterans Affairs’ Veterans Health Administration. We made our selection based on the following criteria: (1) varying fleet sizes, but none smaller than 1,000 vehicles; (2) a combination of military and civilian fleets; (3) a combination of fleets with mileage-based utilization levels above and below federal mileage-based utilization guidelines; (4) fleets that had not been audited by an organization other than GAO within the last 3 years; and (5) other considerations such as use of telematics and adoption of utilization criteria other than the mileage guidelines in GSA regulations. We selected these fleets, which, according to GSA, ranged in size from 1,574 to 13,954 vehicles in 2014, to broadly discuss the experiences and practices across a section of the federal fleet. These results are not generalizable to their overarching agencies or other federal agencies. To determine what GSA’s role is in identifying and reducing underutilized leased vehicles, we reviewed and analyzed relevant federal laws, regulations, executive orders, and GSA guidance to federal agencies for preparing VAM submissions. We described GSA’s role based on the responsibilities delineated in those documents.
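The fuel-coding logic check described earlier in this section could be sketched as follows; the record layout and names are assumptions for illustration, not GSA's actual data model:

```python
# Sketch of the fuel-type mismatch check: flag a vehicle if any fuel
# purchase over the period fails to match a fuel type the vehicle can use
# (flex-fuel vehicles may legitimately use more than one fuel type).
def flag_fuel_mismatches(purchases, accepted_fuels):
    """purchases: iterable of (vehicle_id, fuel_type) fuel-card records.
    accepted_fuels: dict mapping vehicle_id -> set of valid fuel types.
    Returns the set of vehicle IDs with at least one mismatched purchase,
    suggesting fuel erroneously coded at the pump."""
    flagged = set()
    for vid, fuel in purchases:
        if fuel not in accepted_fuels.get(vid, set()):
            flagged.add(vid)
    return flagged
```

Counting `flagged` against the full population gives the kind of anomaly rate the logic check was designed to surface.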
We also interviewed GSA officials, including Fleet Service Representatives (FSR), to better understand the role they play when working with federal agency fleet managers to identify underutilized leased vehicles. To corroborate information that GSA officials told us about FSRs speaking with their agency fleet managers at least once a year to assist in identifying underutilized leased vehicles and to determine any value that fleet managers assign to these conversations, we administered a non-generalizable, mixed-method questionnaire to 68 federal agency fleet managers. To ensure that our questions were meaningful and that we received accurate survey data, we pre-tested our survey with four representatives from four of our selected agencies. Using GSA’s Drive-thru data, we selected fleet managers for our five selected federal agencies who were responsible for at least 20 GSA-leased vehicles. Through interviews with agency officials and FSRs, we learned that the contact information in Drive-thru was not sufficiently reliable for our purposes. Specifically, two of four FSRs that we spoke with and officials from two selected agencies reported that Drive-thru does not contain reliable contact information for individuals who would have conversations with FSRs. These officials reported that some of the contacts in Drive-thru were actually end-users, such as contractors. In other cases, the contact information was outdated. To address this, we requested that the selected federal agencies provide us with lists of current fleet managers within their agencies, and we matched those names to the list of fleet managers from the Drive-thru data. Agencies that were unable to provide independent lists of fleet managers verified which individuals from the Drive-thru data were in the fleet manager’s role at their agency and would be the appropriate individuals with whom to discuss utilization.
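The name-matching step described above reduces to a simple set intersection; the sketch below is a hypothetical illustration (the normalization rules and data structures are assumptions, not the actual matching procedure):

```python
# Hypothetical sketch of matching agency-provided fleet manager lists
# against contacts pulled from Drive-thru data, normalizing case and
# surrounding whitespace before comparison.
def match_fleet_managers(drive_thru_contacts, agency_roster):
    """Return Drive-thru contacts that also appear on the agency roster."""
    roster = {name.strip().lower() for name in agency_roster}
    return sorted(c for c in drive_thru_contacts
                  if c.strip().lower() in roster)
```

In practice, name matching usually also needs manual verification for near-misses, which is consistent with the verification step the agencies performed.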
This matching and verification process brought the survey selection pool to 114 fleet managers, yielding a reasonable number of contacts for BIA, NASA, and NPS—given their respective fleet sizes. However, our matching and verification process resulted in four fleet managers for Air Force and 80 for VHA. Since other fleet managers on Air Force’s list of current fleet managers met our survey pool parameters, we took a random sample of 16 fleet managers to add to the four we identified during the matching and verification process. Also, to avoid over-representing VHA, we randomly chose one fleet manager from each of 19 Veterans Affairs regions. We sent the survey to a total of 69 fleet managers as follows: 12 at BIA; 12 at NPS; 6 at NASA; 20 at Air Force; and 19 at VHA. However, during the survey period, Air Force informed us that one of the selected fleet managers’ roles no longer included responsibilities for GSA-leased vehicles. Therefore, the survey pool totaled 68 selected fleet managers. Fifty-one of the 68 fleet managers completed our survey, yielding a 75 percent response rate. As noted in our report, findings from this survey effort are not generalizable.
To calculate the costs of the vehicles involved in these processes, we conducted a multi-step analytical process. First, we asked GSA to provide data on passenger vehicles and light trucks that were continuously leased from GSA during fiscal year 2014 (i.e., from October 1, 2013, through September 30, 2014, inclusive) for the five selected federal fleets. Table 7 shows how we defined passenger vehicles and light trucks for the purposes of this review. We focused on vehicles that GSA leased on a continuous basis (i.e., to a single agency) for at least fiscal year 2014 so that the agencies were fully accountable for the selected vehicles’ mileage over the entire fiscal year 2014 time period. We scoped our work to include light trucks and passenger vehicles because they comprise the majority of GSA’s continuously leased fleet at 65 percent and 27 percent, respectively. We also asked GSA to exclude tactical, law-enforcement, and emergency-responder vehicles from the selected vehicle population, as well as vehicles located outside of the continental U.S. We made these exclusions because, according to GSA officials, some agencies did not want law enforcement data, for example, released outside of GSA because it could be considered sensitive. In addition, we needed to develop a manageable, selected population given the time and resources needed to investigate each vehicle. After receiving the data from GSA, we conducted various analytical tests to develop a dataset that was free from detectable errors. For example, we examined data on current and previous monthly odometer readings. We then determined which vehicles in the dataset had a current monthly odometer reading that was lower than the previous month’s odometer reading. This allowed us to determine which vehicles likely had errors associated with their end-of-fiscal year mileage—allowing us to remove them from the population of analysis.
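The odometer test just described is a monotonicity check: a reading that falls month over month signals a likely data-entry error. A simplified sketch (illustrative names, not the actual analysis code):

```python
# Flag vehicles whose current monthly odometer reading is lower than the
# previous month's reading -- a likely data-entry error, so such records
# were removed from the population of analysis.
def find_odometer_errors(records):
    """records: iterable of (vehicle_id, previous_reading, current_reading).
    Returns IDs whose reading decreased month over month."""
    return [vid for vid, prev, curr in records if curr < prev]
```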
We also analyzed over 15,500 fiscal year 2014 vehicle records from the five agencies that we reviewed. In total, selected vehicles from these agencies accounted for about 8 percent of the federally leased fleet, although the findings associated with this selection are not generalizable. Next, we determined which selected passenger vehicles and light trucks at each agency did not meet the miles-traveled guidelines in the Federal Property Management Regulations in fiscal year 2014 (12,000 miles and 10,000 miles, respectively). We then sent a list of the selected vehicles that had not met the miles-traveled guidelines to each agency and requested that they group the vehicles into one of the categories described below and depicted in figure 1:
Group 2: No longer leased by the agency as of May 21, 2015;
Group 4: Met a mileage-based utilization criterion defined by the agency;
Group 5: Met a non-mileage-based utilization criterion defined by the agency;
Group 6: Had a written justification in lieu of meeting the utilization criteria that the agency defined;
Group 8: Was repurposed, given additional tasks, or reassigned within the agency during fiscal year 2015; and
Group 9: Was retained beyond May 21, 2015, despite not meeting agency-defined utilization criteria, possessing a written justification for retention, or being given other tasks.
We also asked agencies to identify vehicles that they could not categorize and reasons why—such as vehicles’ lacking readily auditable documentation, including information on whether the vehicle met the agency-defined utilization criteria in fiscal year 2014 (Group 3) and written justification for retaining vehicles that did not meet the agency-defined utilization criteria (Group 7). As these two groups—and vehicles in Group 9—stem from insufficient agency processes to identify and remove leased vehicles, we focused on determining the costs associated with the vehicles in these groups.
Agencies were responsible for categorizing each of the vehicles that GAO provided to them. We provided the agencies with each vehicle’s license plate number, VIN number, make, model, and other identifying information to assist in this process. We did not verify whether agencies categorized vehicles correctly, as some of the information necessary for these categorizations was contained within agency systems and records (for example, if the vehicle met an agency-defined criterion or if the vehicle was repurposed). However, to evaluate the overall reliability of agencies’ vehicle justification, we selected a small sample of vehicles from each agency and then requested written justifications from each of those agencies that reported that they had written justifications for vehicles. We removed vehicles from the selected population if agencies reported that the vehicle should have been excluded from the review (for example, vehicles that agencies reported were law enforcement vehicles but not labeled as such in GSA’s system). We also removed vehicles if the VIN number that the agency provided did not match the VIN from the original information that GSA provided and vehicles that agencies categorized in more than one group, among other data-cleaning efforts. We determined the cost paid to GSA for each vehicle in each of the 9 groups using data from GSA. For each vehicle, we summed the following:
the vehicle’s fiscal year 2014 mileage rate multiplied by the total number of miles the vehicle traveled in fiscal year 2014;
per-mile costs for additional equipment multiplied by the total number of miles the vehicle traveled in fiscal year 2014;
the fixed monthly mileage rate for additional equipment multiplied by 12 (for the 12 months of the fiscal year); and
any flat monthly rate charges multiplied by 12 (for the 12 months of the fiscal year).
These costs represent the amount an agency paid to GSA for each vehicle in fiscal year 2014.
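The four cost components sum as in the sketch below; the parameter names are assumptions for illustration, while the arithmetic mirrors the components described in the text:

```python
# Annual amount an agency paid to GSA for one leased vehicle in fiscal
# year 2014, mirroring the four components described above. Rate names
# are illustrative, not GSA's lease schedule terminology.
def annual_cost_to_gsa(miles, mileage_rate, equip_per_mile_rate=0.0,
                       equip_monthly_rate=0.0, flat_monthly_rate=0.0):
    return (miles * mileage_rate              # per-mile lease charge
            + miles * equip_per_mile_rate     # per-mile equipment charge
            + 12 * equip_monthly_rate         # fixed monthly equipment rate
            + 12 * flat_monthly_rate)         # flat monthly rate charges
```

For example, 10,000 miles at $0.20/mile with $0.02/mile equipment, a $5 monthly equipment rate, and a $100 flat monthly rate comes to $3,460 for the year.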
However, these costs do not include other costs incurred by the leasing agency, such as the salaries of their fleet managers or the costs to garage the vehicles. Also, we did not have information on the opportunity costs of alternatives to replacing these leased vehicles. For example, if a vehicle is removed from an agency’s fleet and another vehicle is used more frequently as a result, the agency would still pay for miles traveled or trips made by the other mode of transportation. Therefore, the costs associated with the groups are annual costs paid to GSA, and an undetermined percentage of these costs would reflect actual cost savings if vehicles were removed. We conducted this performance audit from February 2015 to January 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to NASA policy, each NASA center should conduct an annual review of fleet utilization during the third quarter of each fiscal year. The review first identifies vehicles that fail to meet the minimum utilization goals, also called the “utilization target point.” The target point is calculated by multiplying the average usage by 25 percent (0.25) for each vehicle type, such as sedans/station wagons, ambulances, intercity buses, and trucks with a gross vehicle weight of less than 12,500 pounds. In fiscal year 2014, sedans and trucks less than 12,500 pounds were required to meet the mileage target points shown in table 9 at their respective centers. According to NASA policy, individual vehicles within each vehicle type whose range falls below the utilization target point will be added to the “utilization target list”.
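NASA's target-point computation, as described, reduces to the following sketch (a single vehicle type for simplicity; the field names are assumptions, not NASA's system):

```python
# Utilization target point per NASA policy as described: 25 percent of
# the average usage for a vehicle type. Vehicles below the target point
# go on the "utilization target list" for justification review.
def utilization_target_list(miles_by_vehicle, type_average_usage):
    """miles_by_vehicle: dict mapping vehicle ID -> annual miles for one
    vehicle type. Returns IDs below 0.25 x the type's average usage."""
    target_point = 0.25 * type_average_usage
    return sorted(vid for vid, miles in miles_by_vehicle.items()
                  if miles < target_point)
```

With an 8,000-mile average for a type, the target point is 2,000 miles, and any vehicle below that lands on the list.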
Programs, missions or departments with vehicles on the target list are required to submit a new justification form for each individual vehicle on the list for center review and retention approval. These justifications are then evaluated during the annual review process, with possible outcomes including reassignment within the center, exchanging the vehicle for a different type of vehicle that better suits the mission, or returning the vehicle to GSA. In addition to the contact named above, John W. Shumann (Assistant Director), Melissa Bodeau, Jennifer Clayborne, Monika Jansen, Davis Judson, Terence Lam, Malika Rice, Jerome Sandau, Alison Snyder, Michelle Weathers, Crystal Wesco, and Elizabeth Wood made key contributions to this report. | Federal agencies spent about $1 billion in fiscal year 2014 to lease about 186,000 vehicles from GSA. Assessing the utilization of leased vehicles is important to agency efforts to manage their fleet costs. GAO was asked to examine federal processes for assessing the utilization of leased vehicles. This report addresses, among other objectives, (1) GSA's role in identifying and reducing underutilized leased vehicles and (2) the extent to which the processes used by selected federal agencies facilitate the identification and removal of underutilized leased vehicles, and any cost savings that could be achieved by reducing underutilized vehicles. GAO selected five agencies using factors such as fleet size, and analyzed over 15,500 fiscal-year 2014 vehicle records. At the five agencies, GAO surveyed fleet managers with at least 20 leased vehicles; reviewed fleet policies and guidance; and interviewed federal officials. These findings are not generalizable to all agencies or fleet managers. The General Services Administration (GSA) provides guidance to agencies to assist them in reducing underutilized leased vehicles. 
This guidance can take the form of written products (such as bulletins) or advice from GSA's fleet service representatives (FSR) to agency fleet managers. FSRs assist agencies with leasing issues, and GSA expects its FSRs to communicate with fleet managers about vehicle utilization at least annually. However, 18 of 51 fleet managers GAO surveyed reported that they had never spoken to their FSR about vehicle utilization. GSA has no mechanism to ensure these discussions occur and therefore may miss opportunities to help agencies identify underutilized vehicles. While the selected agencies—the Air Force, the Bureau of Indian Affairs (BIA), the National Aeronautics and Space Administration (NASA), the National Park Service (NPS) and the Veterans Health Administration (VHA)—took steps to manage vehicle utilization, their processes did not always facilitate the identification and removal of underutilized vehicles. Certain selected agencies (1) could not determine if all vehicles were utilized, (2) could not locate justifications for vehicles that did not meet utilization criteria, or (3) kept vehicles that did not undergo or pass a justification review. These agencies paid GSA about $8.7 million in fiscal year 2014 for leased vehicles that were retained but did not meet utilization criteria and did not have readily available justifications (see table). Of the selected agencies, NASA and VHA did not apply their utilization criteria to nearly 400 vehicles, representing about $1.2 million paid to GSA in fiscal year 2014. However, these agencies have taken steps to rectify the issue. The Air Force, BIA, NPS, and VHA could not readily locate justifications for over 1,500 leased vehicles that did not meet utilization criteria, representing about $5.8 million. BIA and NPS are planning action to ensure justifications are readily available in the future.
As of May 2015, NPS and VHA had retained more than 500 vehicles—costing $1.7 million in fiscal year 2014—that were not subjected to or did not pass agency justification processes. While costs paid to GSA may not equal cost savings associated with eliminating vehicles, without justifications and corrective actions, agencies could be spending millions of dollars on vehicles that may not be needed. GAO recommends, among other things, that GSA develop a mechanism to help ensure that FSRs speak with fleet managers about vehicle utilization, that the Air Force and VHA modify their processes for vehicle justifications, and that NPS and VHA take corrective action for vehicles that do not have readily accessible written justification or did not pass a justification review. Each agency concurred with the recommendations and discussed actions planned or underway to address them. |
As one of the largest repositories of personal information in the United States, IRS receives tax returns from about 116 million individual taxpayers who have wage and investment income and from approximately 45 million small business and self-employed taxpayers each year. IRS performs a variety of checks to ensure the accuracy of information reported by these taxpayers on their tax returns. These checks include verifying computations on returns, requesting more information about items on a tax return, and matching information reported by third parties to income reported by taxpayers on returns (i.e., document matching). IRS's document matching program has proven to be a highly cost-effective way of identifying underreported income, thereby bringing in billions of dollars of tax revenue while boosting voluntary compliance. IRC Section 6103, amended significantly by the Tax Reform Act of 1976, is the primary law used to restrict IRS's data-sharing capacity. The law provides that tax returns and return information are confidential and may not be disclosed by IRS, other federal and/or state employees, and certain others having access to the information except as provided in IRC Section 6103. IRC Section 6103 allows IRS to disclose taxpayer information to federal agencies and authorized employees of those agencies for certain specified purposes. Accordingly, IRC Section 6103 controls whether and how tax information submitted to IRS on federal tax returns can be shared. IRC Section 6103 specifies which agencies (or other entities) may have access to tax return information, the type of information they may access, for what purposes such access may be granted, and under what conditions the information will be received. For example, IRC Section 6103 has exceptions allowing certain federal benefit and loan programs to use taxpayer information for eligibility decisions. 
Because the confidentiality of tax data is considered crucial to voluntary compliance, if agencies want to establish new efforts to use taxpayer information, executive branch policy calls for a business case to support sharing tax data. USCIS is part of the Department of Homeland Security (DHS), which was established by the Homeland Security Act of 2002. USCIS is responsible for administering several immigration benefits and services transferred from the former Immigration Services Division of the Immigration and Naturalization Service. Included among the immigration benefits and services USCIS's offices oversee are citizenship, asylum, lawful permanent residency, employment authorization, refugee status, intercountry adoptions, replacement immigration documents, family- and employment-related immigration, and foreign student authorization. USCIS's functions include adjudicating and processing applications for U.S. citizenship and naturalization, administering work authorizations and other petitions, and providing services for new residents and citizens. USCIS's employees who review immigration benefit applications and determine if they should be approved are its adjudicators. USCIS's fraud detection units and Fraud Detection and National Security immigration officers in the districts, service centers, and asylum offices detect potential fraudulent applications and any trends or patterns that suggest potential fraud. USCIS staff work with applicants through the adjudicatory process beginning with initial contact when an application or petition is filed, through the stages of gathering information on which to base a decision. This contact continues to the point of an approval or denial, the production of a final document or oath ceremony, and the retirement of case records. Under current legislative authority, USCIS is not authorized to receive taxpayer information from IRS directly.
USCIS currently obtains self- reported personal and financial information provided by (1) businesses, religious organizations, non-profit entities and individuals applying to sponsor immigrant workers; (2) individuals applying to sponsor relatives; and (3) individuals applying to enter the country, extend their stay or obtain citizenship. USCIS also obtains information from third parties, not including IRS, to verify applicants’ self-reported data. Figure 1 illustrates the current lack of data verification between USCIS and IRS during the immigration application process. Data-sharing arrangements between agencies can take different forms. As used in this report, data sharing means obtaining and disclosing information on individuals between federal agencies (IRS and USCIS) to ensure taxpayers have met their tax obligations or to determine eligibility for immigration benefits. Table 1 lists different forms of data sharing, enabling authority, information gained, and related examples. An example of applicant-initiated sharing occurs via tax checks required by some states. A taxpayer may authorize a third party to receive his or her IRS tax return information via consent. According to an IRS official, many states may require consents to qualify for benefits or certain types of employment. For example, the state of Missouri requires applicants to be current on their state taxes before receiving a new professional license. An example of agency-initiated sharing is an arrangement between IRS and SSA for the Combined Annual Wage Reporting Program. SSA processes and maintains W-2 and W-3 information on employees. IRS maintains personal and financial information on employees. SSA and IRS conduct exchanges of information to ensure employers are submitting accurate wage information and to identify nonfilers. The agencies have a direct data- sharing arrangement. 
Research shows that certain data-sharing programs have value for increasing taxpayer compliance since these programs have identified discrepancies in income reporting amounts and, in some cases, enabled the assessment of additional dollars in unpaid taxes. For example, matching IRS’s unpaid assessment database with the Treasury’s Financial Management Service’s (FMS) records shows a substantial amount of money that could have been collected by either IRS or FMS. In particular, the Taxpayer Relief Act of 1997 allows IRS to continuously levy up to 15 percent of certain federal payments made to delinquent taxpayers. IRS’s continuous levy program adds tax debts to FMS’s program for recovering debts owed to federal agencies. For the levy program, FMS compares federal payee information from agency payment records with extracts of IRS’s unpaid assessments. We estimated that IRS could recover at least $270 million annually from about 70,000 delinquent taxpayers. In addition, our analysis of a match between FMS’s database on payments to contractors and IRS’s unpaid assessment database showed that about 33,000 contractors who received substantial federal payments from civilian agencies during fiscal year 2004 owed a total of more than $3 billion in unpaid taxes. We estimated that if FMS database deficiencies such as erroneous Taxpayer Identification Numbers (TINs) and invalid contractor names were corrected, FMS could have collected at least $50 million more than it did in fiscal year 2004. Although data-sharing arrangements can be useful, privacy advocates, lawmakers, and others are concerned about the extent to which the government can disclose and share citizens’ personal information, including sharing with other government agencies. Historically, lawmakers and policymakers have created legislation to address these concerns. 
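The continuous levy program mentioned above offsets up to 15 percent of each qualifying federal payment against a delinquent taxpayer's unpaid assessments. A minimal sketch of that arithmetic, with hypothetical payment records:

```python
LEVY_RATE = 0.15  # Taxpayer Relief Act of 1997 cap on certain federal payments

def apply_continuous_levy(payments, tax_debt):
    """Offset up to 15 percent of each federal payment against an
    outstanding tax debt; returns (total levied, remaining debt)."""
    levied = 0.0
    for amount in payments:
        if tax_debt <= 0:
            break
        offset = min(amount * LEVY_RATE, tax_debt)
        levied += offset
        tax_debt -= offset
    return levied, tax_debt

# A hypothetical delinquent payee receiving three $10,000 federal
# payments against a $2,000 unpaid assessment:
print(apply_continuous_levy([10000, 10000, 10000], 2000))
```

The first payment yields the full 15 percent ($1,500); the second is capped at the $500 still owed, so the debt is retired after two payments.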
For example, the Privacy Act of 1974 regulates the federal government's use of personal information by limiting the collection, disclosure, and use of personal information maintained in an agency's system of records. The Computer Matching and Privacy Protection Act of 1988 further protects personal information by requiring agencies to enter into written agreements, referred to as matching agreements, when they share information that is protected by the Privacy Act of 1974 for the purpose of conducting computer matches. User fees are collected from identifiable recipients of special benefits beyond those accruing to the general public, and the amounts are based on the cost of providing the service or the market value of the goods, or may be set by legislation. Various user fee authorities and guidance exist, ranging from general to specific. Title V of the Independent Offices Appropriations Act of 1952, codified at 31 U.S.C. § 9701, established general authority to assess user fees or charges on identifiable beneficiaries by administrative regulation. Under this general authority, all fees collected would be deposited as miscellaneous receipts to the General Fund of the Treasury and their use would be determined through the annual appropriations process. Authority to assess user fees may also be granted to agencies through the enactment of specific authorizing or appropriations legislation, which may or may not authorize the agencies to retain and/or use the fees they collect. In setting certain user fees, agencies must follow either the general user fee statute, 31 U.S.C. § 9701, or a specific user fee statute. For example, IRS must follow IRC section 7528 in certain cases, while USCIS adheres to the Immigration and Nationality Act, the general user fee statute, and DHS regulations, which outline the fees that are collected, the amounts, and by whom (see table 2).
The Office of Management and Budget's (OMB) Circular A-25, User Charges, establishes general federal policy and guidance for user fees assessed for government services by executive branch agencies. IRS's chief financial office provides internal guidance on user fees, including examples of how to implement the OMB directives. USCIS provides no such internal guidance. According to USCIS officials, the agency's fee setting is based upon cost studies and a full-fledged regulatory process under the Administrative Procedure Act, with the actual fees provided in regulations. IRS's tax compliance efforts could benefit from data sharing with USCIS if immigration eligibility were changed to require business sponsors to show that they met tax filing and payment requirements to qualify to sponsor immigrant workers. In particular, IRS could benefit because businesses applying to sponsor immigrant workers that had not filed a tax return or paid taxes would need to come into compliance. IRS data can also enable USCIS adjudicators to make more accurate eligibility decisions by better identifying businesses that may not have met immigration eligibility criteria because they had unpaid assessments or did not file tax returns. Further, obtaining IRS data has the potential to improve the timeliness of benefit decisions by (1) decreasing rework/follow-up work and (2) potentially resulting in fewer applicants if benefits are contingent on having met tax filing and payment requirements. IRS could benefit from data sharing with USCIS if certain taxpayers, such as business sponsors who owe taxes, were required to be in compliance with tax filing and payment requirements to qualify for immigration benefits such as sponsoring immigrant workers. Our analysis of automated immigration records matched against IRS databases showed that 18,942 businesses applying to sponsor immigrant workers from 1997 through 2004 had $5.6 billion in unpaid assessments as of December 2003 (app. II).
Many were not paying their tax bills or making payments toward fulfilling their tax obligations. As of December 2003, business sponsors from our nationwide selection that were neither in installment agreements with IRS nor otherwise making payments to IRS for taxes due had unpaid assessments totaling $3.7 billion. Although these businesses with past applications to sponsor immigrant workers would not be affected by a change to requirements for sponsoring workers, since they have already received immigration benefits, USCIS officials said that businesses that apply to sponsor workers tend to do so in multiple years. If businesses were required to meet their tax obligations, then to the extent that future business sponsors owe taxes, they would need to pay their tax bills or make payment arrangements with IRS to come into compliance before becoming eligible to sponsor immigrant workers to enter the country. USCIS officials said a statutory change would be preferable to a regulatory change because, although they acknowledge no explicit prohibition exists in immigration laws against conditioning approval of employer petitions on their tax compliance, they have serious legal concerns about USCIS's authority to issue such a regulation absent specific statutory authority. Although changing the eligibility requirement would likely help bring taxpayers with unpaid assessments into compliance, IRS would be unlikely to recover all taxes owed by the businesses. IRS officials commented that some businesses, even if they came forward to IRS, would not be able to repay their full debt. USCIS officials made a similar comment and added that some businesses may decide not to apply for immigration benefits knowing they are not in compliance with tax filing and payment requirements.
Further, either IRS would only be able to pursue collection for business sponsor cases that exceed thresholds IRS uses in determining how many cases to pursue or, if IRS took steps to collect taxes in all of these cases, it would be unable to work other cases. As we have reported previously, IRS has too many compliance cases to work. Immigration data appear less likely to be useful to IRS for identifying applicants who do not file tax returns. For instance, we identified 33 business/entity sponsors from our nonprobability sample who appeared to be unknown to the IRS because they did not show up in any of six different IRS databases. An IRS investigation of the 33 revealed no productive compliance leads. IRS determined most of the sponsors—businesses and religious organizations—were either tax exempt, had no filing requirement, or were no longer liable for the tax. However, these 33 cases were a small fraction of the almost 20,000 business sponsors that appeared to be unknown to IRS based on our nationwide selection of USCIS business sponsor applications (see app. II). In the mid-1980s, IRS and USCIS entered into a cost-reimbursable data-sharing arrangement under which USCIS shared immigrant data with IRS by completing IRS Form 9003. According to IRS officials, IRS used Form 9003 information to help identify whether individuals who filed for U.S. permanent residency had filed tax returns and properly reported their income. IRS and USCIS shared Form 9003 data for about 10 years but ended this arrangement in 1996. Much of the Form 9003 immigrant data received from USCIS lacked SSNs—a primary mechanism IRS uses for tracking individual taxpayers—which made it increasingly difficult for IRS to use the data to determine whether individuals had filed taxes and properly reported income, according to IRS officials. Additionally, the costs associated with the data-sharing agreement escalated each year, to the point that, in IRS's view, it was no longer cost effective.
IRS data can enable USCIS adjudicators to make more accurate eligibility decisions by better identifying businesses that may not have met immigration eligibility criteria. Our matching of immigration and taxpayer data identified business sponsors that may not meet immigration financial feasibility and legitimacy tests because they have failed to file tax returns and/or pay all of their taxes. Sixteen percent of businesses from our nationwide selection (67,949 of 413,723 businesses) applying to sponsor immigrant workers did not file one or more tax returns at the time of their application to sponsor an immigrant worker between 1997 and 2004 (app. II). Twenty-four percent of businesses (112 of 475 businesses) from our nonprobability sample that applied to sponsor immigrant workers did not file one or more tax returns at the time of their application to sponsor an immigrant worker between 2001 and 2003 (app. II). Five percent of sponsors from our nationwide selection (18,942 of 413,723 businesses) and 20 percent of businesses in our nonprobability sample (94 of 475 businesses) had unpaid tax assessments at the time of application. As of December 2003, the assessments totaled $5.6 billion for the nationwide results and $39 million for the nonprobability sample results (app. II). Filing and paying taxes is an indicator that the financial feasibility and legitimacy tests are met. Figure 2 shows matching results identifying business nonfilers and those with unpaid assessments from our nationwide selection. Immigration adjudicators use applicants' self-reported personal and financial information plus third-party data to make decisions about granting benefits to immigration applicants.
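The percentages above follow directly from the matched counts. As a quick arithmetic check:

```python
# Reproducing the match-rate percentages cited above from the
# report's own counts (nationwide selection and nonprobability sample).
def pct(part, whole):
    """Share of sponsors flagged, rounded to a whole percent."""
    return round(100 * part / whole)

nationwide_nonfiler_rate = pct(67_949, 413_723)  # missing one or more returns
nationwide_unpaid_rate = pct(18_942, 413_723)    # with unpaid assessments
sample_nonfiler_rate = pct(112, 475)
sample_unpaid_rate = pct(94, 475)
print(nationwide_nonfiler_rate, nationwide_unpaid_rate,
      sample_nonfiler_rate, sample_unpaid_rate)  # prints 16 5 24 20
```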
For businesses applying to sponsor immigrant workers—employment-based applications—immigration adjudicators use financial information to evaluate two basic eligibility criteria for businesses sponsoring immigrants: (1) the sponsor's financial feasibility and (2) the legitimacy of the sponsor's existence. Financial feasibility refers to the sponsor's ability to pay wages to or financially support the individual being sponsored. For example, if a company is sponsoring an immigrant for employment, that company must show that it has sufficient ability to pay the worker. Information on tax returns filed with IRS shows income levels, which could be used to validate applicant-provided information. Legitimacy, in the case of employment-based petitions, refers to whether a sponsoring business or organization actually exists, has employees, and has real assets. Figure 3 depicts an overview of the adjudication process for employment-based applications. Since adjudicators receive self-reported—and sometimes false—personal and financial information from applicants, they sometimes obtain additional or different documentation from the applicant or third parties. For example, at the time of application or during the adjudicatory process, applicants may be required to provide additional documentation, such as tax returns from IRS or unofficial copies, to verify financial information reported on immigration forms. However, immigration officials we spoke with in five field locations said applicants could alter or falsify those documents. According to USCIS officials, designing a data-sharing arrangement that includes verification of applicant-provided data during the adjudicatory process would be useful to USCIS adjudicators for the financial feasibility and legitimacy tests and would afford more accurate immigration eligibility decisions.
Additionally, with an eligibility change as discussed earlier, business sponsors would be required to file tax returns, pay amounts due, or make payment arrangements with IRS before qualifying for immigration benefits. This, in turn, could result in a higher likelihood of USCIS getting applications from business sponsors that have met their tax filing and payment obligations, thereby more likely meeting USCIS's financial feasibility and legitimacy criteria. USCIS could then better assure that it was granting benefits to those business sponsors that accurately meet its criteria. The USCIS Ombudsman and representatives of immigration advocacy groups have concerns about data sharing and immigration eligibility for business sponsors. Although the Ombudsman acknowledged the benefits of data sharing, he was concerned that another step in the immigration application process could be more cumbersome for business sponsors. He questioned the type of information IRS would share and said businesses in dispute with the IRS should not be prevented from applying for benefits while the dispute is being resolved. A representative from one advocacy group expressed several concerns on behalf of business sponsors, including:

Increased delays in the immigration process – from their perspective, any additional step in the application process could lengthen the time between when a business decides to sponsor a worker and obtaining USCIS's approval.

Problems with improving USCIS's technological capabilities – according to the immigration advocate, USCIS is still mostly paper-driven and therefore it is questionable whether it could share data electronically.

Special tax issues related to small businesses – many small business sponsors file tax extensions and, therefore, may not have readily available tax documents. Additionally, newer small businesses have no tax history.
Adjudicator training – adjudicators need to understand how to read and interpret business tax documents because many, in the advocate's opinion, have no training in dealing with those complicated documents.

A representative of another immigration advocacy group also voiced the same concern about adjudicator training and added that implementing data sharing may be more useful in dealing with small- and medium-sized businesses because, based on their experience, larger businesses are less likely to be involved in immigration fraud. In addition, although not mentioned by USCIS officials, one potential unintended effect of data sharing might be an increased incentive to employ illegal workers. That is, if a business knew that its tax status would be checked by USCIS, or if it were required to meet tax filing and payment obligations before sponsoring immigrant workers, it might decide to meet its labor force needs with workers not properly authorized to work in the United States. Smaller employers, who are more likely to have tax compliance problems according to IRS, may be more likely to make this choice than larger businesses. According to USCIS officials, adjudicators would find IRS information on small businesses particularly useful since they are limited in their ability to assess these businesses' financial feasibility and legitimacy. IRS has also encountered problems in corroborating financial information provided by small businesses, and its research generally shows higher noncompliance among its small business population. Among our nonprobability sample, 25 of 43 business sponsors with unpaid assessments reported their net incomes as less than $10 million on USCIS employment-based applications. Additionally, USCIS has begun a series of benefit fraud assessments, which take a random sample of applications filed within certain immigrant and nonimmigrant categories and assess them for fraud by verifying key data reported.
Based on the results of these fraud assessments, USCIS could focus on matching IRS data against businesses that are more prone to fraud in determining their financial feasibility and legitimacy. The type of taxpayer data USCIS adjudicators find useful could change under a USCIS proposal to revise regulations for employment-based immigration applications. USCIS officials are seeking to revise requirements since they believe that (1) establishing the validity of the sponsor is sufficient to meet immigration statutory requirements and (2) adjudicators were spending too much time trying to establish a sponsor's income levels for well-known or established businesses. In the interim, in May 2004, USCIS issued updated guidance directing adjudicators, to minimize processing delays, not to reestablish a sponsor's ability to pay when adjudicating its USCIS Form I-485, Application to Register Permanent Residence or to Adjust Status (see app. II). If this regulatory change is made, IRS taxpayer data could still help adjudicators establish the legitimacy or the bona fide nature of a business sponsor. According to USCIS officials, if the proposed regulation were adopted, USCIS would still need tax documents but would no longer need specific income information from tax returns. USCIS adjudicators would need tax return information such as whether the sponsor filed income tax returns, what years they filed, how many employees they had, and whether they paid taxes, and they would need to further evaluate whether additional IRS information would be useful. USCIS would need specific income information from the tax return, such as adjusted gross income, only in cases where the initial information provided by IRS raised questions about the sponsor for USCIS (e.g., if the employer had unpaid assessments or was a nonfiler). As of July 2005, the proposed regulatory change was with DHS's legal office, awaiting approval.
USCIS officials said that access to IRS taxpayer data could also improve the timeliness of making benefit decisions primarily because it could decrease the rework and follow-up work with the applicant. For example, adjudicators said that if they could match applicant data against IRS data early in the review process, they would spend less time researching and following up on the validity of applicant-provided data (e.g., sending fewer RFEs to applicants; see figure 3 for an overview of the adjudicatory process). According to adjudicators, it could take as long as 12 weeks to receive responses from applicants for a certified IRS tax return, during which time, the application file sits on a “suspense” shelf, thereby extending the application processing time. Due to this time gap, in certain cases, background checks must be redone, which further lengthens the application-processing time. As we reported in May 2001, USCIS officials said that lengthy processing times had resulted in increased public inquiries on pending cases, which, in turn, caused USCIS to shift resources away from processing cases to responding to inquiries, adversely affecting processing time. Under a presidential initiative, USCIS has a 5-year, $540 million effort under way intended to reduce its backlog of applications, and ensure a 6-month average processing time per immigration application by the end of 2006. While USCIS has made progress on meeting most of its fiscal year 2004 and 2005 processing time goals, which range from 2 months to 20 months, its overall goal is to reduce processing time to 6 months in fiscal year 2006. This will be difficult because USCIS’s fiscal year 2004 and 2005 average processing times for some forms are more than twice as long as its fiscal year 2006 goal of a 6-month processing time. 
As Figure 4 shows, for three of the six forms we examined—the I-485 (Application to Register Permanent Residence or to Adjust Status), I-751 (Petition to Remove the Conditions on Residence), and N-400 (Application for Naturalization)—USCIS will need to cut its application processing times by more than 50 percent by fiscal year 2006 to meet its goal and thereby improve the timeliness of eligibility decisions. In his fiscal year 2006 budget request, USCIS's director stated, "Although we are on track to achieve the President's backlog elimination mandate, we fully realize that funding alone will not enable us to achieve this goal…I have worked closely with the leaders in USCIS to continually review our processes, identify opportunities for streamlining and further improvement, and to implement meaningful change." USCIS's director is looking for opportunities to further streamline the adjudicatory process, and, as stated previously, USCIS adjudicators said that if they could match applicant data against IRS data early in the review process, they would spend less time researching and following up on the validity of applicant-provided data, which could reduce USCIS's processing times for business sponsors' applications. USCIS staff in headquarters said that changing immigration eligibility to require proof of tax filing and payment compliance for business sponsors may also deter businesses that are not filing and paying their taxes from submitting immigration applications because they would know that USCIS would deny their applications. If so, this could somewhat reduce the volume of applications received and thereby contribute to quicker application processing times. A variety of options is available to IRS and USCIS for establishing and implementing data sharing.
An applicant-initiated data-sharing relationship could be implemented under existing IRC authority through a taxpayer consent, whereby a taxpayer authorizes IRS to disclose his or her information to other agencies. Under an agency-initiated option, agencies could share information directly with each other in an electronic format, a process that is viewed as more efficient and desirable by USCIS and IRS officials. However, achieving such efficient data sharing may take time due to various legal, technological, and cost challenges that must be overcome. Establishing user fees to cover data-sharing costs is one way agencies can fund these various data-sharing options, but IRS lacks authority to include in its user fees the costs of bringing non-compliant business sponsors into compliance or to retain such fees. One option for establishing data sharing between IRS and USCIS is to use an existing authority within the Internal Revenue Code (IRC). USCIS is not currently authorized to directly receive taxpayer information for immigration eligibility decisions under IRC Section 6103. However, individual taxpayers could authorize IRS to disclose their tax return information to agencies like USCIS through a taxpayer consent submitted either via paper or electronically. The consent must be in the form of a separate written document pertaining solely to the authorized disclosure – with text appearing on a paper document or on a separate computer screen. The consent must include: (1) taxpayer identity information (i.e., name, address, SSN or EIN); (2) the designee to whom the disclosure is to be made; (3) the type of tax return (or specified portion of the return) or return information (and the particular data) that are to be disclosed; and (4) the tax year or years to be covered.
The consent must be signed and dated by the taxpayer who filed the return or to whom the return information relates, and IRS must receive the consent within 60 days of the taxpayer's signature and date. Taxpayers use the Form 4506, Request for Copy of Tax Return, and the Form 4506-T, Request for Transcript of Tax Return, to authorize the paper consent. A tax return can show that the taxpayer filed a tax return, and a tax transcript can show whether the taxpayer had a filing requirement, owed taxes, or paid taxes. Figure 5 is an overview of the way IRS processes paper consents, any costs to the taxpayer, and the average turnaround times. IRS paper consents permit the agency to verify for a third party whether a taxpayer has been filing required tax returns and paying his or her taxes. These verifications may be referred to by various names but can be generically called "tax checks." They are often done in connection with a taxpayer's application for benefits, licensing, or employment. Entities that use tax checks include mortgage institutions, major banks, financial institutions, state revenue agencies, and federal agencies. States are the biggest users of taxpayer information. According to an official with the Federation of Tax Administrators, many states have a taxpayer consent requirement, along with their own consent form, to require potential employees or contractors to consent to a state tax check as a condition of employment or to receive a license. When states verify individual and business compliance with state tax requirements, they are also able to determine federal compliance as permitted by IRC 6103(d), since states routinely receive extracts of IRS taxpayer information. (See table 3 for examples of state and private entities that require tax checks.) For example, an Executive Order permits the state of Kansas to require a tax check in order for individuals and businesses to qualify for state employment or state contracts.
State law also permits the rejection of a business's application if the business owes the state taxes. Further, Kansas requires a tax check on all new and renewing vehicle dealership licenses. A March 2003 Kansas Legislative Audit Report found 514 motor vehicle dealers who owed $7 million in state sales tax. Before a business can apply for or renew its dealership license at the state's Division of Motor Vehicles (DMV), the business must present to the DMV proof that it fulfilled its state tax filing and payment requirements. According to an official with the Kansas Secretary of Revenue, for an active dealer, the threat of license revocation provided an incentive to bring non-compliant businesses into compliance. Businesses with unpaid assessments either paid their assessments or set up a payment plan. The state increased its car dealer tax compliance rate by 97 percent, according to an official with the Kansas Secretary of Revenue. As noted previously, IRS taxpayer consents can also be implemented electronically. Similar to the paper consent, the electronic consent must indicate taxpayer identity information, the designee to whom the disclosure is made, the type of return information that is to be disclosed, and the tax year or years covered. USCIS officials are agreeable to using taxpayer consents if they could be implemented electronically in a way similar to an electronic verification pilot project that was undertaken by IRS and the Department of Education (Education). In the pilot, students who wanted to qualify for financial aid authorized IRS to release their tax information to their academic institutions via the Internet. After authorizing the release, IRS sent the individuals' tax transcripts directly to the schools. This procedure then resolved any inconsistencies between information on the tax transcripts and on financial aid applications.
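Under the rules described above, a consent stands or falls on a handful of checks: the four required elements, a signature and date, and receipt by IRS within 60 days of signing. The following is a minimal sketch of that validation; the field names and the sample consent record are hypothetical, since this report does not describe IRS's actual intake systems.

```python
from datetime import date, timedelta

# The four required consent elements summarized above (hypothetical keys).
REQUIRED_FIELDS = ("taxpayer_identity", "designee", "return_info", "tax_years")
CONSENT_WINDOW = timedelta(days=60)  # IRS must receive within 60 days of signing

def consent_is_valid(consent, received_on):
    """Check a taxpayer consent against the disclosure rules summarized
    above: all four required elements present, the consent signed and
    dated, and received by IRS within 60 days of the signature date."""
    if not all(consent.get(f) for f in REQUIRED_FIELDS):
        return False
    if not consent.get("signature") or consent.get("signed_on") is None:
        return False
    return received_on - consent["signed_on"] <= CONSENT_WINDOW

# Hypothetical consent from a business sponsor designating USCIS:
consent = {
    "taxpayer_identity": {"name": "Example Corp", "ein": "00-0000000"},
    "designee": "USCIS",
    "return_info": "Form 1120 transcript",
    "tax_years": [2003, 2004],
    "signature": True,
    "signed_on": date(2005, 3, 1),
}
print(consent_is_valid(consent, received_on=date(2005, 4, 15)))  # prints True
```

The same consent received more than 60 days after the signature date would fail the final check and be rejected.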
One challenge agencies face in implementing data sharing based on taxpayer consents is the cost IRS and USCIS would incur to make data sharing work. Taxpayer consents can be costly and resource intensive when handled on paper, according to IRS officials. For example, IRS processed approximately 340,000 paper return and transcript requests at an IRS-estimated cost of about $6.2 million (see table 4) during fiscal year 2004. Furthermore, the process can be paper intensive since IRS typically receives only hard copies of taxpayer consents. The agency accepts paper requests only via mail or fax; no electronic versions of the paper copies (e.g., scanned copies e-mailed to IRS) are accepted. The process also can be time intensive. For example, the average turnaround time to process a copy of a tax return is 30 days, and the average turnaround time for a tax transcript is 10 to 30 days. USCIS officials are reluctant to use a paper consent because the agency is moving from a paper to an electronic environment. USCIS officials warned that requiring applicants to consent to a paper or electronic tax check would necessitate business process and procedural changes by USCIS, as well as an education process for the immigration community and the third parties that assist petitioners with their applications. USCIS officials said that requiring business and individual applications to undergo a tax check could strain already limited agency resources. USCIS application data showed the estimated annual volume for the six immigration forms we reviewed totaled about two million for fiscal year 2004. USCIS officials said implementing a tax check for employment-based business applications—estimated to be at least 180,000 for fiscal year 2004—would be less difficult to process than tax checks for all six application forms that require financial information.
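The workload figures above imply a unit cost for paper-consent processing; a quick check of the arithmetic:

```python
# Unit cost implied by IRS's paper-consent workload figures above.
requests = 340_000        # paper return/transcript requests, fiscal year 2004
total_cost = 6_200_000    # IRS-estimated processing cost, in dollars
cost_per_request = round(total_cost / requests, 2)
print(cost_per_request)   # roughly 18 dollars per paper request
```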
Also, because the same business sponsor could file applications more than once in a year, USCIS could, depending on how it implemented the requirement, make a tax check valid for a certain period of time under whatever policy it established. The business then would not have to undergo a tax check every time it sponsored a worker, which would lessen the strain on USCIS's resources. Although use of tax data has helped some federal agencies better administer their programs, some are concerned that widespread use of taxpayer consents may undermine taxpayers' right to privacy and, subsequently, voluntary tax compliance. The confidentiality of tax data is considered by many to be crucial to voluntary compliance. The Joint Committee on Taxation and Treasury's Office of Tax Policy warn that the use of consents for programmatic governmental purposes potentially circumvents the general rule of taxpayer confidentiality because the taxpayer waives certain restrictions on agencies' use of the data. When such waivers are granted, agencies are not obligated to follow the recordkeeping, reporting, and safeguard requirements that apply to tax data, although the less stringent requirements of the Privacy Act may still apply. IRS's National Taxpayer Advocate stated in 2003 that "Widespread use of tax information by federal or state agencies will, in fact, undermine our tax administration system," and that "A change in tax compliance of even one percentage point equates to an annual loss of over $20 billion of revenue to the federal government." In its October 2000 report to the Congress on taxpayer confidentiality, Treasury's Office of Tax Policy recommended that before a government program is allowed to use taxpayer consents, the requesting agency should first conduct a statistical test match or a small-scale pilot.
If that test or pilot demonstrates that the program's need for information outweighs concerns about taxpayer privacy and voluntary tax compliance, then Treasury would determine whether the agency can proceed with a limited program using taxpayer consents or whether a legislative amendment should be sought permitting direct access. Another option for data sharing is direct agency-to-agency sharing. Such data-sharing arrangements are enabled by specific statutes or regulations and, in the case of electronic data matching, are governed by written agreements between the agencies. Education, SSA, and the Centers for Medicare and Medicaid Services (CMS) are examples of agencies that have existing data-sharing relationships with IRS using electronic media such as magnetic tape. These agencies are able to share data with IRS because each was given specific authority under IRC Section 6103, which authorizes the disclosure of a taxpayer's return information and tasks IRS with oversight of safeguards for taxpayer information. Further, agencies enter into a computer matching agreement (CMA), which outlines the purpose of the data-sharing relationship, the information to be shared, the method of sharing, the approximate number of records to be shared, the frequency of sharing, and the length of the data-sharing arrangement. The requesting agency is required to attach to the CMA an analysis that measures the costs and benefits associated with a data-sharing arrangement with IRS. Agencies must also provide IRS an annual report on the security safeguards that protect against any unauthorized access to or disclosure of data received during the arrangement. As shown by the examples in table 5, each agency's data-sharing relationship with IRS differs in terms of the number of records shared, the method and frequency of sharing, the annual cost to the agency, and the cost per record.
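The elements a CMA must outline, listed above, can be captured in a simple record type. This is an illustrative sketch only, not an official CMA format, and all field values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ComputerMatchingAgreement:
    """Illustrative record mirroring the CMA elements described in the text."""
    purpose: str                 # purpose of the data-sharing relationship
    information_shared: list     # data elements to be exchanged
    method: str                  # e.g., magnetic tape or computer file
    approximate_records: int     # approximate number of records to be shared
    frequency: str               # how often sharing occurs
    duration_months: int         # length of the arrangement
    cost_benefit_attached: bool  # the required cost-benefit analysis attachment

# Hypothetical example values, loosely patterned on the arrangements in table 5.
cma = ComputerMatchingAgreement(
    purpose="determine benefit eligibility",
    information_shared=["name", "SSN"],
    method="magnetic tape",
    approximate_records=1_000_000,
    frequency="daily",
    duration_months=18,
    cost_benefit_attached=True,
)
print(cma.purpose)  # determine benefit eligibility
```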
Electronic Data Sharing between IRS and the Department of Education

The Department of Education obtains the mailing addresses of taxpayers for use in collecting debt from students who have defaulted on loans. Under the Taxpayer Address Request program, as authorized by IRC Section 6103(m)(4), Education furnishes the name and SSN of each defaulted student to IRS. IRS then matches the information to its records and provides Education the most recent address for the taxpayer. Education sends about 4.6 million records annually to IRS for matching.

Electronic Data Sharing between IRS and the Social Security Administration

SSA uses every provision within IRC Section 6103 that authorizes the disclosure of taxpayer information by IRS to SSA for benefit eligibility purposes. For example, the Disclosure of Information to Federal, State, and Local Agencies provision, IRC Section 6103(l)(7), enables SSA to request and use taxpayer information from IRS to determine the eligibility of applicants for and recipients of Supplemental Security Income, the nation's largest cash assistance program for the poor. SSA officials estimate savings of approximately $48 million annually from this data-sharing relationship. In addition, under the Medicare Secondary Payer Program, as authorized by IRC Section 6103(l)(12), SSA uses information collected from employers of working-aged beneficiaries and Medicare-eligible spouses, such as name and SSN, to identify Medicare-eligible individuals who have primary coverage through a group health plan and thereby avoid duplicate payments for services. This data-sharing relationship's annual savings are estimated at $463 million.
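The Taxpayer Address Request match described above works as a keyed lookup: Education furnishes name and SSN, and IRS returns the most recent address on file, or nothing if no record matches. A minimal sketch of that match (all records and identifiers hypothetical):

```python
# Sketch of the Taxpayer Address Request match: Education supplies name/SSN,
# IRS matches against its records and returns the most recent address.
# All records and SSNs below are hypothetical.
irs_addresses = {
    "123-45-6789": "100 Main St, Springfield",
    "987-65-4321": "200 Oak Ave, Riverton",
}

education_requests = [
    {"name": "A. Borrower", "ssn": "123-45-6789"},
    {"name": "B. Borrower", "ssn": "555-00-1111"},  # no IRS match
]

def match_addresses(requests, address_file):
    """Annotate each request with IRS's most recent address, if any."""
    return [{**req, "address": address_file.get(req["ssn"])} for req in requests]

results = match_addresses(education_requests, irs_addresses)
print(results[0]["address"])  # 100 Main St, Springfield
print(results[1]["address"])  # None: SSN not found in IRS records
```

In the actual program this keyed match runs over roughly 4.6 million records a year, but the logic is the same join on SSN.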
Electronic Data Sharing between IRS and the Centers for Medicare and Medicaid Services

The most recent data-sharing relationship under IRC Section 6103 is between IRS and CMS: CMS uses taxpayer information on a daily basis to determine eligibility for the Transitional Assistance Program, which provides up to $600 to individuals to purchase prescription drugs. IRC Section 6103(l)(19) authorizes IRS to disclose to CMS specified tax return information of applicants for transitional assistance to help CMS identify eligible applicants. Figure 6 describes how the data matching occurs. In fiscal year 2004, CMS sent IRS about one million records for matching, and this data-matching arrangement cost CMS approximately $130,000.

Electronic Data Sharing via Transcript Delivery System

IRS offers electronic service, or e-service, products that are accessed directly by authorized third parties such as tax practitioners and state revenue agencies. Available since July 2004, the Transcript Delivery System (TDS) allows authorized third parties to access taxpayer transcript information electronically and immediately—the return transcript, account transcript, record of account, and verification of nonfiling—products that otherwise are available only via the paper consent Form 4506-T. Tax practitioners can use this service if the taxpayer authorizes disclosure via Form 2848, Power of Attorney, which must be on file with IRS before the practitioner can request and receive a taxpayer's data. This type of disclosure is authorized by IRC Section 6103(e)(6). Only three states currently use TDS, as authorized by IRC Section 6103(d). The TDS e-services are Internet based, and authorized users can access them from any computer with minimal Internet capabilities. The authorized individual retrieving taxpayer information inputs the request, and the information is accessed immediately.
For fiscal year 2004, IRS estimates that it cost about 4.8 cents to provide access to a transcript using TDS.

Legal Challenge of Data Sharing

Electronically sharing taxpayer information directly from IRS to USCIS without a taxpayer consent poses a number of legal, technological, and cost challenges if the information is used for immigration benefit purposes. In order to electronically share information without first obtaining taxpayers' consent, IRC Section 6103 would need to be changed to authorize IRS to disclose taxpayer information directly to USCIS for immigration eligibility purposes. Over the years, a number of exceptions have gradually been added to IRC Section 6103 that allow access to taxpayer information. As mentioned previously, some are concerned that disclosing taxpayer data could affect taxpayers' right to privacy and, in turn, undermine voluntary tax compliance. According to Treasury, the burden of supporting an exception to IRC Section 6103 should be on the requesting agency, which should make the case for disclosure and provide assurances that the information will be safeguarded appropriately. Table 6 lists the criteria Treasury and IRS have applied when evaluating specific legislative proposals to amend IRC Section 6103 for governmental disclosures.

Technological Challenges of Electronic Data Sharing

USCIS must also address a number of technological challenges to lay the foundation for data sharing between the two agencies. For example, in our July 2004 testimony, we reported the following:

USCIS does not maintain any automated financial data on applicants. Although USCIS automates certain personal information from benefit applications, such as an individual's name and alien registration number, it does not automate any financial data reported on the benefit application or in accompanying documents such as tax returns.

USCIS systems contain some inaccurate data.
We and the Department of Justice's Office of Inspector General have criticized USCIS systems because they contain some inaccurate data in the pieces of information that identify applicants (such as immigrants' addresses).

USCIS and IRS databases do not always have a common numerical identifier for tracking individuals or businesses. USCIS uses alien registration numbers as tracking identifiers, whereas IRS uses SSNs or EINs. Although USCIS's systems capture SSNs/EINs if they are provided on applications, USCIS does not require them to be entered into its systems. Thus, even though business sponsors should all have SSNs (if sole proprietors) or EINs (if another form of business), USCIS may not have entered the number into automated databases and therefore cannot match directly to IRS records.

These limitations in USCIS's record keeping would make electronic sharing of data between USCIS and IRS less productive than it otherwise could be, regardless of whether the data sharing was done directly between the agencies pursuant to a change in IRC Section 6103 or through taxpayer consents. USCIS officials recognize that these problems could interfere with achieving data sharing in the fully electronic environment they hope to achieve for adjudicators, but if a policy decision were made to share data, they believe that some form of electronic data sharing could be achieved relatively quickly without major technological improvements. The officials believe that USCIS could develop interim solutions while other USCIS transformation activities are under way. For example, these officials said USCIS could modify its existing automated systems to add an SSN/EIN identifier and could adopt procedures to ensure the identifiers are routinely entered in all locations. They view this as a business process and policy challenge rather than a technological challenge.
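The interim approach officials describe, adding an SSN/EIN identifier so USCIS records can be matched to IRS data, amounts to maintaining a crosswalk from USCIS's tracking numbers to IRS's. A minimal sketch of that lookup (all identifiers and records hypothetical):

```python
# Crosswalk linking USCIS alien registration numbers ("A-numbers") to the
# SSN/EIN identifiers IRS uses, enabling a direct match to IRS records.
# All identifiers and records below are hypothetical.
crosswalk = {
    "A12345678": "12-3456789",  # business sponsor -> EIN
    "A87654321": "98-7654321",
}

irs_filings = {"12-3456789": {"filed": True, "balance_due": 0}}

def tax_check(a_number):
    """Look up a USCIS record's SSN/EIN and return IRS filing data, if any."""
    tin = crosswalk.get(a_number)
    if tin is None:
        return None  # no SSN/EIN captured by USCIS: cannot match to IRS
    return irs_filings.get(tin)

print(tax_check("A12345678"))  # {'filed': True, 'balance_due': 0}
print(tax_check("A87654321"))  # None: EIN known, but no IRS filing found
print(tax_check("A00000000"))  # None: no identifier captured by USCIS
```

The two `None` cases illustrate the record-keeping gap the report describes: a failed match can mean either true noncompliance or simply a missing or inaccurate identifier on the USCIS side.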
They also believe they could link the existing USCIS record identifiers to SSNs and EINs to enable data sharing with IRS relatively quickly. Officials did offer some cautions about how quickly even these changes could be implemented. For example, they said that although adding an SSN/EIN identifier to their existing systems may take only a few months, changing immigration forms and notifying adjudicators and the larger community about the change could take much longer. As mentioned previously, these officials contend that establishing a new requirement that immigration applicants undergo a tax check would necessitate business process and procedural changes by USCIS, as well as an education effort directed at the immigration community and the third parties that assist petitioners with their applications, which could take years. Finally, these USCIS officials pointed out, and the agency's fiscal year 2006 budget request reflects, that the agency's current priorities fall into three areas: (1) enhancing national security, (2) eliminating the immigration benefit application backlog, and (3) improving customer service. They stated that implementing changes to enable data sharing with IRS might take longer because it is not one of the agency's three main priorities. If data sharing were to occur, officials ultimately would prefer to have it integrated into a transformed USCIS information technology (IT) environment. Since July 2004, USCIS has taken a number of steps to improve its IT capability. In March 2005, USCIS unveiled its IT transformation plan, which USCIS describes as a large-scale, complex program to increase capabilities, streamline processes, and support the collection of service fees.
As such, the overall planned IT upgrade includes changes that will improve the agency's overall IT environment as well as facilitate data sharing, such as: moving from a paper to an automated environment; adding a unique identifier to track records for background checks; and implementing electronic adjudication, among the first increments of the IT upgrade, whereby adjudicators will be able to review immigration applications and supporting evidence online. Of the many ongoing activities related to USCIS's IT transformation, USCIS officials described three major projects under way to improve its ability to receive and share data within the agency as well as with other agencies.

Data layer/repository: this project will present users with a consolidated system for accessing information from 63 USCIS systems, rather than the current situation in which users must log onto separate systems to obtain data. This capability would be available to adjudicators and, eventually, to external users.

Software updates: this project will upgrade, among other things, USCIS's desktop and software capabilities, its servers and network, and its capability to support the new electronic processes.

E-adjudication pilots: this project will allow paperless (electronic) adjudication for certain immigration forms. Initially, USCIS plans to pilot green card replacement/renewal applications (Form I-90) and extension applications for temporary workers in specialty occupations (Form I-129 H1B).

In the fall of 2004, USCIS officials anticipated implementing the e-adjudication pilots by mid-April 2005 and having the ability to receive data from IRS in June or July 2005. However, these projects have not gotten under way as scheduled; the start of the pilots is dependent on the data layer and software updates being in place. USCIS could not provide us with a completion date for the data layer and e-adjudication pilots due, in part, to uncertainty regarding future funding.
USCIS expects to complete full implementation of its information technology transformation by fiscal year 2010. We are examining USCIS's technological improvements as part of our ongoing work on immigration backlog and benefit fraud issues.

Cost Challenges of Data Sharing

Estimating the cost benefit associated with establishing and maintaining a data-sharing relationship can be complicated. One reason developing a cost estimate is difficult is that electronic methods of sharing data can vary, and different costs are associated with different methods. For example, USCIS may incur technological costs to improve its IT capability to enable data sharing, which can be carried out by either magnetic tape or computer file, each with different costs. However, some of the necessary IT improvements are already planned and would be funded as part of USCIS's comprehensive upgrade of its IT systems even if data sharing with IRS does not occur. Estimating the cost benefit is also complicated by the difficulty both agencies may encounter in establishing costs for providing the service. The Computer Matching and Privacy Protection Act of 1988 states that no matching program can be approved unless the agency has performed a cost-benefit analysis for the proposed matching program that demonstrates the program is likely to be cost effective. Similarly, Treasury's criteria for considering whether a statutory change should be made for the sharing of tax data stress the importance of documenting whether a substantial benefit is likely and what the resource demands on IRS would be to support sharing the data. In our July 2004 testimony, we recommended that IRS and USCIS assess the benefits and costs of data sharing to enhance tax compliance and improve immigration eligibility decisions. IRS responded by stating it would study the issues and work with USCIS to identify possible solutions.
DHS/USCIS agreed with our recommendation and said they were "exploring a technical capability for effectively and efficiently sharing data with the IRS on individuals who apply for immigration benefits." In addition, in 2004 we reported that IRS did not have a cost accounting system capable of providing reliable cost information, and we also reported that USCIS had insufficient cost data to determine the full extent of its operating costs. Finally, estimating the cost benefit is complicated by uncertainty regarding the net benefits that would be gained from data sharing. For example, IRS is unable to pursue all of the current leads it receives from existing data corroboration efforts, like document matching. Therefore, to the extent that obtaining and analyzing additional data from USCIS developed more leads for possible enforcement actions, IRS likely would be able to pursue only the portion of cases that exceeds the thresholds IRS uses in determining how many cases to pursue. Further, apparent noncompliance may not be substantiated. For example, some of those who appear not to have filed tax returns based on matching IRS and USCIS data may indeed have filed but not been found in IRS's databases because of inaccurate information in USCIS files, or may not have had a filing obligation at all. Of business sponsors with unpaid assessments, some portion likely would be unable to repay all taxes owed. From USCIS's perspective, although we found that many businesses may not have filed tax returns or may owe taxes, some of these situations may not be significant enough to affect a USCIS adjudicator's decision about their financial feasibility or legitimacy. For instance, some of the businesses applying to sponsor immigrant workers that owe taxes may not owe enough to raise doubts about their ability to pay the worker. This may be especially true for larger businesses.
Under current statutes, USCIS likely would be able to increase its user fees to recover all costs associated with data sharing with IRS and to retain the fees, but IRS would not be able to charge user fees that include costs to bring noncompliant business sponsors into compliance. Because IRS could not recover all costs, it might not realize an increase in net tax collections through data sharing with USCIS. Both IRS and USCIS currently have authority that states when and under what circumstances they can charge user fees and that defines permissible uses of the funds. IRS is statutorily restricted to retaining no more than $119 million in user fees annually. If the additional user fees to perform tax checks for USCIS business applicants seeking to sponsor workers generated funds exceeding IRS's limit, the agency would be unable to retain the excess amounts. Further, IRS is limited to recovering the costs directly associated with providing a service to taxpayers. USCIS, on the other hand, has no limit on the amount of user fees it can collect and currently is authorized to collect and retain user fees related to providing adjudicatory and naturalization services, including compliance-related costs. In 2004, IRS collected over $137 million in user fees for a wide range of services, including installment agreements, offers in compromise, and Freedom of Information Act (FOIA) requests. In fiscal year 2004, about 82 percent of all user fees collected by IRS were for installment agreements or for employee plans and exempt organizations' letter rulings and determination letters. The 1995 Treasury Appropriations Act specifies that IRS can keep an overall maximum of $119 million per year of the user fees it collects for specific purposes, with the rest going into the Treasury general fund. However, statutory formulas also limit how much IRS can retain of certain individual user fees.
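The interplay of per-fee statutory limits and the overall annual ceiling can be sketched with simple arithmetic: per-fee caps apply first, then the overall ceiling, and whatever is not retained goes to the Treasury general fund. In the sketch below, only the roughly $137 million collected and the $119 million ceiling come from the text; the individual fee amounts and caps are hypothetical:

```python
# Illustration of IRS user fee retention: per-fee statutory caps apply first,
# then an overall annual ceiling; the remainder goes to the Treasury general
# fund. Per-fee figures below are hypothetical; the totals echo fiscal year 2004.
OVERALL_CAP = 119_000_000

fees_collected = {"installment agreements": 80_000_000,
                  "letter rulings": 32_000_000,
                  "other": 25_000_000}
per_fee_caps = {"installment agreements": 50_000_000,
                "letter rulings": 25_000_000,
                "other": 15_000_000}

retainable = sum(min(amount, per_fee_caps[fee])
                 for fee, amount in fees_collected.items())
retained = min(retainable, OVERALL_CAP)
to_general_fund = sum(fees_collected.values()) - retained

print(retained, to_general_fund)  # 90000000 47000000
```

With these hypothetical caps, IRS retains about $90 million of the $137 million collected even though the overall ceiling is $119 million, which is the pattern the fiscal year 2004 figures in this report show.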
Due to these individual user fee limits, in 2004 IRS retained about $90 million of the user fees collected (see table 7), while the remaining $48 million went to the Treasury general fund. According to USCIS officials, in 2004 USCIS collected $1.3 billion in user fees for administering a variety of immigration services and benefits. Altogether, the agency has about 60 user fees, which range from $2 to $585, except for the $1,000 premium-processing fee for select employment-based applications. With the exception of FOIA fees, which are retained by the Treasury, DHS retains all user fees collected. According to USCIS, the agency's budget will be entirely fee based beginning in fiscal year 2007. IRS cannot collect and retain a user fee to support the compliance-related costs it incurs in connection with data-sharing activities. IRS currently has authority only to collect and retain a user fee related to the direct cost of providing the service—such as providing a copy of a tax return. If business sponsors were required to meet their tax obligations before they could sponsor immigrant workers, IRS also would incur costs to bring some sponsors into tax filing and payment compliance. As discussed earlier, bringing noncompliant business sponsors into compliance could displace other compliance activities that IRS would otherwise undertake. Depending on whether bringing the noncompliant business sponsors into compliance brought in more, the same, or less tax than the displaced cases, it could result in little change to IRS's overall net tax collections, an increase, or possibly even a decrease. If data sharing were begun between USCIS and IRS and a user fee were charged, a number of implementation issues also would need to be considered. For example, would both agencies charge fees to cover their costs?
Would one charge a fee sufficient to cover both agencies' costs and, if so, how often would it forward collected monies to the other agency? In addition, if IRS were authorized to include costs for bringing noncompliant business sponsors into compliance, it likely would have to adjust the fees as it gained experience in determining how much cost it would incur to bring sponsors into compliance. Data sharing between IRS and USCIS has the potential to better guide IRS's efforts to identify and correct noncompliance by taxpayers and to result in more informed, accurate, and timely immigration eligibility decisions by USCIS adjudicators. Although the agencies would benefit in differing ways, establishing and implementing data sharing could be beneficial to both. For tax compliance purposes, requiring a tax check—documenting whether a business sponsor has filed required returns and paid required taxes—likely would lead some businesses to come into compliance because they would know that USCIS would consider this information in determining whether the business can sponsor immigrant workers. However, because USCIS would consider this information as only one indication of whether businesses qualify to sponsor workers, a greater tax compliance increase is likely if businesses were required to meet tax filing and payment obligations as a condition for sponsoring workers. Given the billions of dollars of unpaid tax assessments among past business sponsors, our matching results illustrate strong potential for increased tax collections from changing immigration eligibility requirements. Although USCIS officials say no statutory provision prohibits USCIS from changing its regulations to require business sponsors to meet their tax filing and payment obligations, officials believe a statutory change would better withstand a legal challenge.
Further, because collecting the unpaid assessments of business sponsors would displace other tax collections work, absent funding to cover its costs of bringing the business sponsors into compliance, IRS might not realize a net increase in overall tax collections. For immigration eligibility purposes, requiring business sponsors to meet their tax filing and payment obligations would help ensure that businesses applying to sponsor immigrant workers are legitimate. Short of this change, either an applicant-initiated or an agency-initiated data-sharing arrangement could help improve benefit decisions. Our analysis shows strong potential to improve thousands of immigration eligibility decisions if USCIS uses IRS data to help support officials' decisions about a business sponsor's financial health. Even though data sharing shows promise to benefit both agencies, they would face implementation choices and challenges that could require technological or statutory solutions. In order to develop a better understanding of the implementation challenges and costs, to explore the most practical options for full-scale implementation of data sharing, and to more completely assess the benefits to IRS and USCIS, USCIS should undertake a pilot test of data sharing under existing authority to use taxpayer consents to obtain tax data. Such a study would be consistent with congressional and executive branch policies that stress that sharing of tax data be thoroughly justified, given concerns about possible adverse effects on tax compliance if the confidentiality of taxpayers' data is compromised. Additionally, a study could assess issues raised by USCIS's Ombudsman and immigration advocacy groups.
A study would provide executive and legislative policymakers better information for determining the costs and benefits of data sharing and whether USCIS should require taxpayer consents from all future business sponsors or whether a change to IRC Section 6103 would better support efficient data sharing. To improve taxpayer compliance and USCIS's immigration benefit decisions, Congress should consider (1) changing immigration eligibility requirements so that businesses applying to sponsor immigrant workers must meet their tax filing and payment obligations and (2) authorizing a user fee to be collected and retained by IRS to cover the costs of bringing noncompliant taxpayers into compliance. To improve the accuracy and timeliness of USCIS's immigration eligibility decisions absent a requirement that businesses have met their tax filing and payment obligations, we recommend that the Secretary of the Department of Homeland Security direct USCIS, in consultation with IRS, to conduct a pilot data-sharing test. In the test, USCIS should require a tax check for selected businesses and other entities applying to sponsor immigrant workers before they qualify for immigration benefits. The pilot test should assess and document the costs and benefits of data sharing, including key issues such as using paper or electronic consents or pursuing specific IRC Section 6103 disclosure authority, assessing resource implications, and considering how the agencies would allocate responsibilities for collecting and distributing user fees from the business sponsors. The Commissioner of Internal Revenue provided written comments on a draft of this report in a September 16, 2005, letter. The Commissioner agreed that data sharing between IRS and USCIS within the Department of Homeland Security may have many benefits.
IRS agreed to conduct a small-scale pilot test, in conjunction with USCIS, to determine whether a business case exists for supporting data sharing before pursuing legislation or a large-scale taxpayer consent program. The Commissioner also stated that an executive working group will determine the merits of applying user fees for compliance data sharing. On behalf of the Secretary of DHS, the Director of DHS's Office of Inspector General Liaison provided written comments on a draft of this report in a September 26, 2005, letter. The Director generally agreed with our recommendation and acknowledged that the pilot program would be consistent with USCIS's desire to explore ways to streamline its processes and could provide necessary information for considering the feasibility of initiatives such as data sharing on a larger scale. He appreciated GAO's recognition of the burden that paper taxpayer consent forms place on USCIS and the agency's reluctance to embark on a process that would be largely paper based. Although the Director generally agreed with our recommendation, his agreement was contingent on the extent to which USCIS can lawfully engage in a pilot program as we recommend. The Director's letter did not elaborate on his uncertainty regarding USCIS's ability to lawfully engage in the recommended pilot program. Based on supplemental communication, his concern focused on USCIS's authority to require business sponsors to consent to a tax check. According to the Director, immigration laws contain no explicit prohibition on conditioning employer petitions on their tax compliance, and doing so might be legally defensible. Nevertheless, he said USCIS has serious legal concerns about its authority to promulgate regulations with such a requirement. The Director did not describe the legal concerns, and therefore we do not have a basis to evaluate them. Consulting with IRS in determining how to design the pilot test should help USCIS resolve these concerns.
We note that Education requires taxpayer consents as a condition of participation in certain student loan repayment programs. Should USCIS ultimately conclude that it does not have authority to require such a waiver, an option for proceeding with the pilot test would be to ask selected business applicants to voluntarily allow USCIS to directly obtain tax data from IRS. Taxpayers may authorize others to obtain their tax data directly from IRS. The Director also expressed concerns about the impact of ongoing tax disputes on immigration benefit decisions if those decisions are contingent on a taxpayer being required to meet tax filing and payment obligations. We believe consultations between USCIS and IRS in designing a pilot program can address this issue. For instance, officials might decide that businesses with some minimal level of tax underpayment or businesses with tax delinquencies that are actively participating in a payment arrangement with IRS would not be disqualified from sponsoring immigrant workers. We would urge USCIS and IRS to develop data-sharing policies that would minimize the impact of ongoing tax disputes on immigration benefit decisions for business sponsors. In commenting on a user fee to be collected and retained by the IRS, the Director agreed that IRS should be provided with adequate resources to carry out its tax compliance mission but had serious concerns about the user fee proposal. The Director commented that policy considerations have kept USCIS from completely using its authority to recover its full costs. 
He noted that Congress has mandated several additional fees for certain employment-based applications and that, “in short, the more interagency functions the overall cost of an application to USCIS is expected to support, the higher the cost to the applicant without consequent improvements in USCIS services, the less likely it is that USCIS will be able to increase its fees as may be necessary to fully recover its own costs.” The Director also noted that we had previously reported that fee collections are not sufficient to pay USCIS’s full costs. We understand that many considerations must be taken into account in setting USCIS’s overall fees. However, as our report indicates, obtaining a benefit from IRS’s perspective depends substantially on having sufficient funds to bring business applicants that have outstanding tax filing or payment obligations into compliance. Further, as also indicated in our report, USCIS access to IRS tax data for determining immigration benefit decisions has the potential to improve service to USCIS’s business applicants because it could decrease rework and follow-up work with the applicant that currently occurs. This would help USCIS in minimizing processing time for all business sponsors. Ultimately, with more routine access to IRS data, USCIS might not need to request as much financial information from business applicants as it does now since USCIS officials themselves see IRS data as more reliable than information provided by applicants. Finally, the GAO report cited by the Director did conclude that USCIS fees were not sufficient to fully fund USCIS’s operations. The insufficiencies, however, were not due to fees being collected for interagency functions. Rather, we said this resulted because USCIS’s fee schedule was based on an outdated fee study that did not include all costs of USCIS’s operations and costs had increased since the fee study was conducted. 
Finally, the Director commented that the proposed user fee to cover IRS's compliance-related costs seemed to be different in concept from other existing user fees. As the Director noted, it was not within the scope of our review to examine all user fee relationships between IRS and other agencies. Based on the work we did, we are not aware of another case in which IRS receives a user fee to bring applicants for other agencies' benefits into compliance with their tax obligations. However, Congress has authorized new or expanded funding arrangements to help IRS deal with its workload. For instance, in Treasury's appropriations for 1995, Congress specifically authorized IRS to establish new user fees or raise existing fees for services provided in order to increase receipts. More recently, Congress also authorized IRS to use private collection agencies to assist in collecting delinquent taxes and specified that up to 25 percent of the money collected can be used to pay the collection agencies and another 25 percent can be retained by IRS. Given the substantial unpaid taxes that we found among businesses applying to sponsor immigrant workers, we believe that it is appropriate for Congress to consider steps for effectively bringing these taxpayers into compliance without unduly deterring IRS from pursuing other noncompliant taxpayers. As our report explains, for this to occur, IRS's costs for bringing the noncompliant business sponsors into compliance must be covered; otherwise IRS might experience a net decrease in tax collections. Consequently, our report put forth the user fee as one option for Congress to consider for supporting a potential data-sharing arrangement between IRS and USCIS. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies to interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Secretary of the Department of Homeland Security, the Director of the United States Citizenship and Immigration Services, and other interested parties. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Our objectives were to determine (1) the potential benefits of data matching and (2) the options for establishing and maintaining a data-sharing relationship between the Internal Revenue Service (IRS) and the U.S. Citizenship and Immigration Services (USCIS), including any challenges associated with those options. We performed our work at various IRS offices, including the Office of Governmental Liaison and Disclosure; the Office of Safeguards; and the Office of Program, Evaluation, and Risk Analysis. Our work included interviews with employees in IRS's Wage and Investment Operating Division and Small Business/Self Employed Operating Division. We interviewed USCIS officials in headquarters’ operational, technological, fraud, ombudsman, and policy offices. Additionally, we interviewed representatives of two immigration advocacy groups—the American Council on International Personnel and the American Immigration Lawyers Association—to obtain their perspectives on potential changes to immigration eligibility rules. To determine the potential benefits of data matching between IRS and USCIS, we summarized the benefits reported in our July testimony (GAO-04-972T), including the results of our data matching efforts (see app. II).
We worked with IRS on conducting additional research for business sponsors unknown to IRS (identified in our July 21, 2004 testimony) to determine whether they are operating businesses/organizations and have any tax compliance problems. To better illustrate the potential tax compliance benefit related to business sponsors who have unpaid tax assessments, we further stratified the business sponsors with unpaid assessments from our nationwide selection to identify subpopulations of business sponsors that were or were not in a payment arrangement or had made payments within 2 years. To determine the options for establishing and maintaining a data-sharing relationship between the IRS and USCIS, we interviewed IRS and USCIS officials on processes in place that support data sharing under existing disclosure authorities. We summarized operational information such as timeliness, costs, and volume levels for existing data-sharing relationships to provide perspective on the options for establishing a data-sharing relationship between IRS and USCIS. We interviewed IRS officials on the resource implications of sharing data via different data-sharing arrangements. We compiled examples of private institutions and state entities that use “tax checks”—IRS verification that a taxpayer filed and/or paid his or her taxes—for eligibility determination purposes and summarized costs associated with “tax checks.” We interviewed IRS and USCIS officials and obtained and reviewed statutory and regulatory guidance on the use of user fees and summarized information on (1) types of user fees IRS has in place to support compliance and enforcement activities, (2) regulatory implications for employing a user fee to support data sharing between USCIS and IRS, and (3) whether user fees go to the general fund or the Treasury fund.
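The stratification of sponsors with unpaid assessments described above can be sketched as a simple three-way partition. This is an illustrative sketch only, not GAO's actual method; the record fields, EINs, and dates are hypothetical.

```python
from datetime import date

# Hypothetical sketch: split sponsors with unpaid assessments into those in
# a payment arrangement, those who made a payment within the last 2 years,
# and those doing neither. Field names and sample records are assumptions.
def stratify(sponsors, as_of):
    groups = {"in_arrangement": [], "recent_payment": [], "no_payments": []}
    for s in sponsors:
        if s["installment_agreement"]:
            groups["in_arrangement"].append(s)
        elif s["last_payment"] and (as_of - s["last_payment"]).days <= 2 * 365:
            groups["recent_payment"].append(s)
        else:
            groups["no_payments"].append(s)
    return groups

sample = [
    {"ein": "EIN-1", "installment_agreement": True,  "last_payment": date(2003, 6, 1)},
    {"ein": "EIN-2", "installment_agreement": False, "last_payment": date(2002, 3, 15)},
    {"ein": "EIN-3", "installment_agreement": False, "last_payment": None},
]
groups = stratify(sample, as_of=date(2003, 12, 31))
print({k: len(v) for k, v in groups.items()})
# -> {'in_arrangement': 1, 'recent_payment': 1, 'no_payments': 1}
```

The partition is exhaustive and mutually exclusive, so each sponsor with an unpaid assessment falls into exactly one subpopulation, mirroring the report's breakdown of the $5.6 billion into amounts that were or were not being paid down.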
Finally, we interviewed USCIS officials and reviewed documents on planned changes to immigration eligibility that may impact the type of IRS information immigration adjudicators will need for eligibility decisions. To determine the potential challenges of data matching between IRS and USCIS under the various data-sharing options, we primarily summarized the challenges reported in our July testimony, including the technological, cost, and legislative barriers. We identified and reviewed the legislative and regulatory authorities that govern disclosure of personal and financial information for eligibility determinations and tax compliance purposes. We interviewed USCIS policy and legal staff on the implications of changing immigration eligibility decisions to require applicants to (1) provide taxpayer consents that allow IRS to share data and (2) be current on their taxes, and we reviewed related documentation. We also interviewed USCIS and IRS officials regarding future cost challenges associated with establishing a data-sharing relationship. We assessed the reliability of IRS's Business Master File (BMF) and Individual Master File (IMF) data and USCIS's Computer Linked Application Information Management System, Version 3.0 (CLAIMS 3), a database containing nationwide data but not naturalization data, by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. As part of our annual audits of IRS’s financial statements, we also assess the reliability of IRS’s BMF and IMF data with respect to unpaid assessments by testing selected statistical samples of unpaid assessment modules. We determined that the data were sufficiently reliable for the purposes of this report. Our review was subject to some limitations.
We relied on IRS officials to identify offices that use personal information because there is no central, coordinating point within IRS for receipt of this type of information. We relied on USCIS officials to identify immigration forms they believed would most benefit from data sharing with IRS, and we relied on IRS and USCIS officials’ views on possible impediments or missed opportunities to verify information, any additional data sharing and verification needs, and the benefits of increased disclosure of taxpayer information. Because our sample of 984 hard copy applications at selected USCIS field locations was not a probability sample, we cannot make inferences about the population of applications. In addition, because employer identification numbers/social security numbers were only available for 3.4 million of the 4.5 million applications in our nationwide selection of automated applications, our findings from these records are not representative of the entire population. We did not assess the reliability or quality of taxpayer information collected by IRS or the accuracy of information applicants reported to USCIS. Immigration applicants/taxpayers who were in IRS’s nonfiler database could include individuals who did not meet IRS filing requirements. We relied on IRS’s investigation of the 33 business sponsors that were not in IRS’s databases since disclosure rules limit our contact with taxpayers. Since IRS searched its tax data for the last 5 years (1999–2004) and we collected 7 years of immigration data (1997–2004), an unknown but likely small percentage of the businesses that submitted applications during 1997 and 1998, but are unknown to IRS, may no longer be in operation. Additionally, we did not assess the reliability of IRS data on the cost of paper taxpayer consents since we used this information for background purposes. We conducted our work from August 2004 through August 2005 in accordance with generally accepted government auditing standards.
The tables that follow present the data we reported in our July 2004 testimony (GAO-04-972T) on the results of matching two sets of USCIS immigration data with IRS taxpayer data to determine the potential value for increased data sharing and matching. First, we used a nationwide selection of automated data on certain immigration applications: I-129 (Petition for a Nonimmigrant Worker), I-140 (Immigrant Petition for Alien Worker), and I-360 (Petition for Amerasian, Widow(er), or Special Immigrant) submitted from January 1, 1997, through March 5, 2004, to USCIS service centers for immigration benefits. We used only those applications in USCIS’s Computer Linked Application Information Management System, Version 3.0 (CLAIMS 3), a database containing nationwide data that contained an individual’s Social Security Number (SSN) or a business’s Employer Identification Number (EIN). For the matching process, 3.4 million out of 4.5 million records had usable SSNs or EINs. We obtained automated data for those years because USCIS’s automated system had historical data not readily available in hard copy files. We used these data to determine whether businesses and others that had applied to sponsor immigrant workers or immigrants applying to change their immigration status had filed a tax return with IRS and, if so, whether they owed taxes to IRS. Because the nationwide selection did not include any financial information, we could not use it to determine whether USCIS applicants reported the same income amounts to IRS and USCIS. Second, we visited five USCIS field locations and selected a nonprobability sample of 984 immigration files covering the period of 2001 through 2003 at four of the locations because they contained personal as well as financial information. These hard copy files were applications for citizenship, employment, and family-related immigration and change of immigration status applications. 
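The nationwide matching described above (checking each usable SSN or EIN against IRS records for filing and payment status) can be sketched as a series of set-membership tests. This is an illustrative sketch, not GAO's or IRS's actual code; the category labels follow the report, but the data structures and identifiers are assumptions.

```python
# Hypothetical sketch of the match logic: each applicant's taxpayer
# identification number (TIN) is checked against IRS master-file records,
# then against unpaid-assessment and nonfiler subsets. All sets are assumed.
def classify(applicant_tins, master_file, unpaid, nonfilers):
    results = {}
    for tin in applicant_tins:
        if tin not in master_file:
            results[tin] = "no match"
        elif tin in unpaid:
            results[tin] = "unpaid assessment"
        elif tin in nonfilers:
            results[tin] = "nonfiler"
        else:
            results[tin] = "filed, no unpaid assessment"
    return results

r = classify(
    applicant_tins={"TIN-A", "TIN-B", "TIN-C", "TIN-D"},
    master_file={"TIN-A", "TIN-B", "TIN-C"},
    unpaid={"TIN-B"},
    nonfilers={"TIN-C"},
)
print(r["TIN-D"])  # -> no match
```

In practice only the roughly 3.4 million records with usable SSNs or EINs could enter such a lookup, which is why the matching results cannot be generalized to the full 4.5 million applications.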
We used the hard copy immigration files to build an automated database of certain personal information, such as the individual’s SSN or business’s EIN and income reported to USCIS. We obtained hard copy files for those years because the USCIS offices we visited had immigration applications for those years on-site. Immigration offices send older files to storage. Since each district and service center organized and stored its applications in a different way and immigration officials could not always provide an updated count of applications by form number, we developed an approach to selecting applications that included pulling approximately every 50th file in immigration file rooms. We generally selected approximately 50–75 files at each field location for the following forms: I-129 (Petition for a Nonimmigrant Worker); I-140 (Immigrant Petition for Alien Worker); N-400 (Application for Naturalization); I-751 (Petition to Remove the Conditions on Residence); I-360 (Petition for Amerasian, Widow(er), or Special Immigrant); and I-864 (Affidavit of Financial Support) that accompanies the I-485 (Application to Register Permanent Residence or to Adjust Status). We planned to select 50 files for Form I-829 (Petition by Entrepreneur to Remove Conditions) but only reviewed 12 files due to resource constraints and the voluminous nature of the application files. The matching results for our nonprobability sample included Form I-829s for a small number of individual immigrants who had unpaid assessments or were nonfilers and none for business or individual sponsors. To facilitate matching immigration and taxpayer data, we divided immigration applicants into three groups: business sponsors, individual sponsors, and individual beneficiaries.
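The file-room selection described above (pulling roughly every 50th file) is a form of systematic selection; a minimal sketch follows, with a hypothetical file list standing in for an actual file room.

```python
# Minimal sketch of systematic selection: take every 50th file from an
# ordered list. The interval (50) is from the text; the file names and the
# file-room size are hypothetical. Because the result is not a probability
# sample, no inference to the full population can be made from it.
def every_nth(files, n=50):
    return files[n - 1::n]

file_room = [f"application-{i:05d}" for i in range(1, 3001)]
selected = every_nth(file_room)
print(len(selected), selected[0])  # -> 60 application-00050
```

A fixed-interval pull like this is easy to carry out consistently across file rooms that are organized differently, which appears to be why it was chosen over a count-based random sample.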
We matched the SSNs/EINs in our nationwide selection of immigration applications and our nonprobability sample of immigration applications with IRS's Business Master File (BMF) and Individual Master File (IMF) and other subsets such as the Revenue and Refunds Database. We identified immigration applicants/taxpayers that (1) matched with the IRS master files, (2) had unpaid assessments, (3) were nonfilers, (4) were businesses/organizations that had no record of tax activity in the last 5 years, and (5) did not match IRS master files. Additionally, to ensure we identified only business and organization sponsors whose EINs were unknown to IRS, we had IRS perform three additional matches using its BMF Taxpayer Identification Number Cross-Reference File, the BMF Entity File, and the IMF Entity File. We used this sample to determine whether USCIS applicants reported the same income information to IRS as to USCIS and also as a second source of examples of USCIS applicants who may not have filed tax returns and may have owed taxes to IRS. Tables 1–3 show our results on business sponsors, individual sponsors, and individual beneficiaries that have unpaid assessments or are nonfilers for both our nationwide selection and nonprobability sample of immigration files. In addition to the contact named above, major contributors to this assignment were Signora J. May, Assistant Director; Jyoti Gupta, Tina Younger, Michele Fejfar, Shirley Jones, Amy Rosewarne, and James Ungvarsky, who made key contributions to this report.

In 2000, federal agencies estimated they saved at least $900 million annually through data sharing initiatives. The Internal Revenue Service (IRS) can use data from taxpayers and third parties to better ensure taxpayers meet their obligations. Likewise, Congress has authorized certain agencies access to taxpayer information collected by IRS to better determine benefit eligibility.
In July 2004, we reported that data sharing between IRS and the United States Citizenship and Immigration Services (USCIS) has the potential to improve tax compliance as well as immigration eligibility decisions (GAO-04-972T). For this report, GAO determined (1) the potential benefits of data matching, and (2) the options and associated challenges. Data sharing can help improve (1) tax compliance if businesses applying to sponsor immigrant workers are required to meet tax filing and payment requirements, and (2) the accuracy and timeliness of USCIS's immigration eligibility decisions if it obtained tax data from IRS to help ensure business sponsors meet eligibility criteria. As of December 2003, IRS databases showed 18,942 businesses (5 percent) applying to sponsor immigrant workers had $5.6 billion in unpaid assessments. Of this amount, businesses were not in installment agreements with IRS or otherwise making payments on $3.7 billion. If future business sponsors owe taxes and are required to meet their tax obligations, they would need to make arrangements with the IRS to come into compliance. Although USCIS officials acknowledge that no explicit prohibition exists in immigration laws against conditioning approval of employer applications on their tax compliance, USCIS officials said a statutory change is preferable because they have legal concerns about USCIS's authority to issue such a regulation absent specific authority. IRS data can help USCIS make more accurate eligibility decisions by better identifying businesses that may not have met eligibility criteria due to having unpaid assessments or not filing returns. In our nationwide selection, 67,949 of 413,723 (16 percent) business sponsors were in IRS's nonfiler database at the time of their application. A variety of options is available to IRS and USCIS for establishing and implementing data sharing. 
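The figures above can be cross-checked with quick arithmetic. This is a back-of-the-envelope verification, not new data; it assumes the 413,723 business sponsors in the nationwide selection is also the denominator behind the 5 percent figure, which the text does not state explicitly.

```python
# Quick arithmetic check of the summary figures (dollar amounts in billions).
total_unpaid = 5.6   # unpaid assessments of businesses applying to sponsor workers
not_covered = 3.7    # amount with no installment agreement or payments
print(round(total_unpaid - not_covered, 1))  # -> 1.9 (amount under some arrangement)

# Percentage checks (413,723 denominator for the first line is an assumption).
print(round(18_942 / 413_723 * 100))  # -> 5  (sponsors with unpaid assessments)
print(round(67_949 / 413_723 * 100))  # -> 16 (sponsors in the nonfiler database)
```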
An applicant-initiated data-sharing arrangement could be implemented under existing Internal Revenue Code authority through taxpayer consent, whereby taxpayers authorize IRS to disclose their information. USCIS then could verify applicant-provided data by obtaining tax returns or tax transcripts. Treasury guidance suggests a small-scale pilot using consents as a way to make the business case for continued access to taxpayer information. In general, the more that data sharing could be done electronically, the more efficient the data sharing could be. However, achieving electronic data sharing may take longer than paper-based processes due to legal, technological, and cost challenges. Further, if business sponsors need to come into compliance, net tax collections might not increase if collecting their taxes displaces other IRS work. Establishing user fees to cover data-sharing costs could be a way to fund data sharing, but IRS lacks the authority to collect and retain a user fee to cover compliance-related costs associated with data sharing. |
Since the 1998 embassy bombings in East Africa, State has constructed more than 100 new diplomatic facilities and enhanced security measures at many others. Increased security at these facilities has raised concerns that would-be attackers may shift their focus to more easily accessible “soft targets”—places frequented by Americans and other Westerners, as well as their transportation routes. According to State, U.S. government employees and their families are most at risk on these transportation routes. Many of the worst attacks on U.S. diplomatic personnel—including 10 of the 19 attacks that prompted State to convene Accountability Review Boards (ARBs)—occurred while victims were in transit. Among these was the 2004 murder of a U.S. diplomat in Iraq, which led the resulting February 2005 ARB to find that the diplomat’s death was almost certainly caused by his failure to follow the post’s security policy. The February 2005 Iraq ARB report consequently recommended several actions intended to increase post personnel’s compliance with security policy and personal security practices, which are discussed in more detail later in this report. Figure 1 shows the locations of the 10 transportation-related attacks that resulted in the formation of ARBs, and figure 2 depicts an August 2008 attack against the Principal Officer at the U.S. consulate general in Peshawar, Pakistan, shortly after she left her residence. According to State, transportation security overseas is a shared responsibility involving various entities. As established by the 1961 Vienna Convention on Diplomatic Relations, the host nation is responsible for providing protection to diplomatic personnel and missions. In addition, as required by the Omnibus Diplomatic Security and Antiterrorism Act of 1986, the Secretary of State, in consultation with the heads of other federal agencies, is responsible for developing and implementing policies and programs to protect U.S.
government personnel on official duty abroad, along with their accompanying dependents. At posts abroad, chiefs of mission are responsible for the protection of personnel and accompanying family members at the mission. Further, as the February 2005 Iraq ARB noted, all mission personnel bear “personal responsibility” for their own and others’ security. Lead operational responsibility for transportation security overseas falls on the Bureau of Diplomatic Security (DS), which is responsible for establishing and operating security and protective procedures at posts. For example, one division within DS manages State’s armored vehicle program, while another manages a contract used to provide transportation security in certain high- and critical-threat locations. DS also evaluates the security situation at each overseas post by assessing five types of threats—political violence, terrorism, crime, and two classified categories—and assigning corresponding threat levels for each threat type in the annually updated Security Environment Threat List. The threat levels are as follows: critical: grave impact on U.S. diplomats; high: serious impact on U.S. diplomats; medium: moderate impact on U.S. diplomats; and low: minor impact on U.S. diplomats. At posts, DS agents known as regional security officers (RSOs), including deputy RSOs and assistant RSOs, are responsible for protecting personnel and property. Among other things, RSOs are responsible for issuing transportation security and travel notification policies, providing security briefings to newly arrived personnel, and communicating information about threats to post personnel. According to State officials, RSOs at certain locations are also responsible for managing the post’s fleet of armored vehicles, while at other locations this responsibility is assumed by the general services office as part of its management of the post’s overall vehicle fleet. State’s policies are outlined in the Foreign Affairs Manual (FAM) and the corresponding Foreign Affairs Handbook (FAH).
Sections of the FAM and FAH pertinent to transportation security include various subchapters detailing, among other things, elements that all security directives, including transportation-related policies, are required to include; State’s armored vehicle program; and personal security practices for employees to follow. See table 1 for further details on selected FAM and FAH policies that are relevant to transportation security. In addition to these policies, State has produced other guidance documents, such as a checklist that outlines criteria for DS reviewers to use when evaluating posts’ security policies and programs, including those related to transportation security, and cables that reiterate the recommendations of the February 2005 Iraq ARB and the importance of good personal security practices. State uses a variety of means to provide transportation security for U.S. personnel posted overseas. These include, but are not limited to, the following:

Armored vehicles. In fiscal years 2011 through 2016, State obligated more than $310 million for armored vehicles, such as sport utility vehicles, vans, and sedans.

Security contractors and bodyguards. Through the Worldwide Protective Services contract, DS has hired private security contractors to provide transportation security for diplomatic missions in certain high- and critical-threat areas. State obligated more than $2.7 billion for this contract in fiscal years 2011 through 2016. According to DS officials, DS also uses host nation police, local guard force personnel, U.S. government protective agents, or a combination of all three as bodyguards in more than 100 countries worldwide. State obligated more than $150 million for such bodyguards in fiscal years 2011 through 2016.

State has established policies related to transportation security for overseas U.S. personnel, but gaps exist in guidance and monitoring.
We reviewed 26 posts and found that each had issued a transportation security policy and a travel notification policy. However, the policies at 22 of the 26 posts were missing elements required by State, due in part to fragmented guidance on what such policies should include. State also lacks a clear armored vehicle policy for overseas posts and effective procedures for monitoring whether posts are assessing their armored vehicle needs at least annually, as required by State. These gaps limit State’s ability to ensure that posts develop policies that are clear and consistent with State requirements and that vehicle needs for secure transit are met. Federal internal control standards state that in order to enable personnel to perform key roles in achieving agency objectives and addressing risks, management should develop policies that outline personnel’s responsibilities in a complete, accurate, and clear manner. DS encourages every post to issue a transportation security policy and a travel notification policy. Because DS requires that these policies be issued as security directives, they are subject to criteria that apply to all security directives. Specifically, the February 2005 Iraq ARB recommended that all security directives include six elements intended to emphasize the personal security responsibilities of all personnel under chief-of-mission authority (see table 2). For example, the ARB report recommended that security directives, among other things, identify the consequences of violations and oblige all members of the mission to report any known or suspected violations. After accepting these recommendations, State promulgated them to all posts through multiple cables as well as the FAM. DS requires all security directives to include the six elements recommended by the February 2005 Iraq ARB. According to DS, transportation security and travel notification policies are required to include additional standard elements so that U.S. 
personnel and their families are aware of the potential transportation-related security risks they may face while at post. Specifically, transportation security policies are required by DS to clarify, among other things, whether the use of public transit is permitted and whether any zones are off-limits to U.S. personnel, while travel notification policies are required to ensure that both official and personal travel are appropriately approved and conducted with appropriate vehicles, escort, and notification. Each of the 26 posts we reviewed had issued a transportation security policy and a travel notification policy. We found that 4 posts had issued policies that met all the required criteria, while 22 posts had not. Specifically, 20 of 26 were missing one or more of the six elements required by DS in all security directives as recommended by the February 2005 Iraq ARB (see table 2), and 4 of 26 did not include all of the standard transportation-related elements required by DS (see table 3). Compliance with the required standard transportation-related elements was significantly higher (85 percent of the posts we reviewed) than compliance with the six elements required in all security directives as recommended by the February 2005 Iraq ARB (23 percent of the posts we reviewed). Two key factors contribute to these shortcomings in posts' transportation security and travel notification policies. First, no single source of guidance for RSOs on transportation security and travel notification policies lists all of the elements the policies at the posts are required to contain. Specifically, DS has produced multiple sources of guidance on what posts are to include in transportation security and travel notification policies, but each source covers a different set of requirements.
For example, before a post security evaluation is conducted, RSOs are given a copy of the checklist used for the evaluation to help guide them in identifying and complying with security program requirements. While the checklist includes the standard transportation-related elements required by DS, such as the use of public transit and restricted zones, it does not list all of the elements required by DS in all security directives as recommended by the February 2005 Iraq ARB—a potential reason why the policies we reviewed were five times more likely to be missing the ARB elements than the standard transportation-related elements. By contrast, the FAM chapter on security directives contains the February 2005 Iraq ARB criteria but does not list any of the standard transportation-related elements required by DS. RSOs at 3 of the 9 posts we visited noted that it would be helpful if DS provided examples of model policies to use as guidance. DS is in the process of developing standard templates for certain security directives, including transportation security and travel notification policies, but this effort is not yet complete. Second, DS’s monitoring of post transportation policies lacks any additional guidance to ensure that reviewers assess policies consistently in order to identify any missing policy elements and suggest corrective action. Specifically, we found that DS reviewers lack a comprehensive set of criteria for evaluating posts’ transportation security and travel notification policies. DS officials told us that DS reviewers are expected to look for the February 2005 Iraq ARB criteria during their evaluations of post security directives, but as noted earlier the checklist that DS reviewers use does not mention the February 2005 Iraq ARB or list its criteria. 
Further, while DS officials stated that security directives lacking the February 2005 Iraq ARB criteria should receive lower scores, we found that several of the posts’ transportation policies that were lacking these required elements nevertheless received the highest possible score from DS reviewers. In January 2016, DS updated the checklist reviewers use to assess transportation security and travel notification policies by adding a reference to the FAM section that lists the February 2005 Iraq ARB criteria. Although citing the relevant FAM section is helpful, the checklist does not include all the actual requirements. Due to these weaknesses in its guidance and monitoring, State has no assurance that all of its posts are developing transportation-related policies that are comprehensive and consistent with department policy. As noted earlier, federal internal control standards state that agencies should develop policies that outline responsibilities in a complete, accurate, and clear manner to enable personnel to perform key roles in achieving agency objectives and addressing risks. The FAH establishes a minimum requirement for the number of armored vehicles at each post. The FAH also states that post Emergency Action Committees (EAC) must meet at least annually to discuss post armored vehicle programs and requirements. According to the FAM, it is important that EACs provide information on posts’ armored vehicle requirements to ensure there is sufficient time to budget for the costs of such vehicles, including the extra costs associated with armoring them. We found that DS may not be meeting the first of these FAH requirements, and EACs are not meeting the second requirement at every post. With respect to the first requirement, DS officials initially explained that under the FAH, every embassy and consulate is required to have a certain number of armored vehicles, but we found that not every consulate met this requirement as of May 2016. 
These potential deficiencies exist in part because DS has not instituted effective monitoring procedures to ensure that every embassy or consulate is in compliance with the FAH’s armored vehicle policy. Regarding the second requirement, DS officials in the armored vehicle program office told us that they do not receive annual assessments of post armored vehicle needs from all posts as required. This deficiency exists because DS lacks a mechanism for monitoring whether EACs meet at least annually to discuss their posts’ armored vehicle needs. Furthermore, DS officials in the armored vehicle program office stated that, unlike some other offices within DS and State’s regional bureaus, their office cannot compel post officials to hold EACs because overseas posts do not fall within their office’s chain of command. Without up-to-date information on posts’ armored vehicle needs, State cannot be certain that posts have the vehicles necessary to provide U.S. personnel and their families with secure transportation. DS officials told us that the lack of regular EAC assessment of the armored vehicle needs at each post also creates procurement challenges for armored vehicles aside from those used by chiefs of mission and principal officers, which the program office proactively replaces as needed. For all other armored vehicles, individual posts communicate their armored vehicle needs to the program office throughout the year as these needs arise. As a result, the program office orders vehicles in smaller numbers or waits until multiple orders come in, according to DS officials. This leads to avoidable delays because less urgent needs are not filled until the office receives enough requests to justify processing an order. Urgent vehicle needs are processed immediately, but DS officials told us this is inefficient because processing an order for one or two vehicles requires the same investment of time as a much larger request. 
While DS is taking steps to address the lack of annual EAC assessment of post armored vehicle needs, it is unclear if the planned steps will fully address the problem. In May 2016, DS released a cable to all posts to reiterate the annual requirement, and DS stated that it plans to work through RSOs at post to ensure that EACs meet at least annually. However, DS has not developed a mechanism to track whether the EACs actually do so. According to DS officials, the program office is also planning to develop a forecasting model to overcome some of the obstacles related to the lack of regular assessment, but the accuracy of this forecasting will ultimately depend on the timely submission of quality information from posts. State provides several types of training related to transportation security, but weaknesses exist in post-specific refresher training and State’s tracking of armored vehicle driver training. RSOs receive required training related to transportation security in special agent courses, and nonsecurity staff reported receiving relevant training before departing for posts and new arrival briefings at posts. Staff at most of the posts we visited either had difficulty remembering certain key details covered in the new arrival briefings or described the one-time briefings as inadequate. State lacks a clear requirement for RSOs to provide periodic refresher briefings and for post personnel to participate in such briefings, potentially putting them at increased risk. Additionally, we found gaps or errors in State’s tracking of armored vehicle driver training; State is taking steps to address these problems. Federal internal control standards state that appropriate training, aimed at developing employee knowledge, skills, and abilities, is essential to an agency’s operational success. It is vital that U.S.
diplomatic personnel—including RSOs as well as nonsecurity staff—receive training on the transportation-related security risks they may face overseas and how best to manage them in order to facilitate mission-related outcomes while protecting lives and property. As table 4 shows, State provides a number of training courses, targeting different audiences, that cover transportation security. For example, RSOs receive training in various tactics related to transportation security in the Basic Special Agent Course, which is required of all DS special agents, as well as in other training courses, such as the High Threat Operations Course (see fig. 3). With respect to non-RSO personnel, one or more participants in most of the focus groups we conducted mentioned that they had taken Foreign Affairs Counter Threat training. According to State, by 2019, all personnel posted overseas under chief-of-mission authority, with certain exceptions, will be required to take this training regardless of where they are posted. Foreign Affairs Counter Threat training covers several topics relevant to transportation security, such as defensive driving, route analysis, and the importance of taking personal responsibility for one’s security and varying routes and times to reduce one’s predictability. Figure 4 shows examples of transportation security-related elements of the Foreign Affairs Counter Threat training. Additionally, focus group participants stated that they had received new arrival briefings from the RSO upon their arrival at post. According to the FAM, new arrival briefings are to be comprehensive and are to acquaint newly arrived personnel with the post’s “total security environment,” including security requirements and procedures that are in effect, such as travel notification requirements. Furthermore, participants are required to affirm that they have received the briefing.
However, participants in 10 of 13 focus groups either had difficulty recalling certain security policies and requirements or described the one-time briefings as inadequate. For example, some participants were unaware or unclear about specific aspects of their post transportation security policy or travel notification policy, while others said it can be challenging to remember the content of the new arrival briefings, in part because staff are simultaneously managing the process of moving and adjusting to a new post. Additionally, some participants suggested that the one-time nature of the briefings is not conducive to keeping staff informed of changes to security requirements and procedures, particularly in locations with fluid security environments. State lacks a clear requirement for RSOs to provide periodic refresher briefings and for post personnel to participate in such briefings. In part, this may result from the FAM’s lack of clarity and comprehensiveness on this matter. Specifically, the FAM states that RSOs must conduct refresher briefings “periodically” at “certain posts where personnel live under hostile intelligence or terrorist threats for long periods” but does not define “periodically” or “long periods.” Furthermore, according to DS officials, the FAM requirement does not extend to posts that face high levels of crime or political violence, even though both types of threats can pose risks to personnel in transit. Moreover, while there is a requirement for post personnel to affirm that they have received new arrival briefings, according to DS officials, there is no such requirement for affirming that they have received refresher briefings. RSOs at some of the posts we visited noted that they take steps to make updated briefings available to staff, such as electronically posting updated briefing slides and having regularly scheduled briefings open to all staff—not just new arrivals. 
However, RSOs at those posts stated that it was not mandatory for staff to view the updated slides or periodically attend the regularly offered briefings. DS headquarters officials commented that they believe most violations of post transportation security and travel notification policies are inadvertently committed by staff who have forgotten the information conveyed in new arrival briefings. Without effective reinforcement of the information that is covered in new arrival briefings, State cannot ensure that staff and their families have the knowledge they need to protect themselves from transportation-related security risks. According to the FAM, RSOs must ensure that locally employed staff assigned to drive armored vehicles for the chief of mission or principal officer attend the DS Training Center’s armored vehicle driver training course. This training covers topics such as emergency driving, attack recognition, and evasive maneuvers, among others. In addition, RSOs must ensure that these drivers take refresher training every 5 years following the initial training. The FAM also requires that State documentation be complete to the extent necessary to facilitate decision making, and federal internal control standards similarly state that managers should use quality information—information that is, among other things, current, complete, and accurate—to make informed decisions. We found two problems in State’s tracking of armored vehicle driver training, each of which State has either addressed or is taking steps to address. First, DS officials who manage the course were unaware of the existence of seven diplomatic and consular posts overseas and consequently lacked information on whether those posts had drivers in need of armored vehicle driver training. After we brought these seven posts to their attention, DS course managers consulted with other colleagues in DS and told us that they determined none of the seven posts had untrained drivers. 
Second, DS officials verified that some of State’s training records for armored vehicle drivers include inaccurate information about the posts to which the drivers are assigned. For example, DS course managers told us that, according to a database they use to track students of the training course, seven drivers from a particular post received training in fiscal years 2011 through 2015, but State’s official training records show no drivers from that post as having received training in that time period. A cognizant official stated that those seven drivers instead appear in the records as being assigned to a different post in the same country. When we asked about the cause of these inaccuracies, the official explained that they were due to clerical errors and stated that State will be taking steps to identify and correct similar errors in the future. State has a variety of systems for RSOs to communicate threat information to personnel and for personnel to report travel plans to RSOs. However, we found that several factors can inhibit the timely two-way communication of threat information and travel plans between RSOs and personnel. Timely communication is critical for managing transportation security risks, and failure to communicate important transportation-related information and receive such information promptly could leave overseas personnel facing avoidable security risks. According to DS officials, RSOs are responsible for communicating transportation-related threat information to post personnel. In addition, DS officials stated that various other officials may be involved in the process of communicating and receiving threat information at post, including consular officers, information management officers, and senior post officials, as well as post personnel themselves. 
For instance, according to DS officials, post personnel are responsible for making their mobile phone numbers available to RSOs so that they can receive text-based messages about potential threats, and they are generally also responsible for sharing threat information with their family members. RSOs at the nine posts we visited told us they communicated transportation-related threat information to post personnel through various methods, such as post-issued radios, personal and official e-mail, text messages to work and personal mobile phones, and phone trees. However, we learned of instances at four of the nine posts in which personnel did not receive important threat information in a timely manner. For instance, at one of the posts we visited, the RSO sent a security notice restricting travel along a specific road and warning that recent violent protests in the area had resulted in injuries and even death, but because the notice was sent exclusively to state.gov e-mail addresses, some non-State personnel at the post did not receive it at the e-mail address they regularly used and were unaware of the restriction. The personnel subsequently made unauthorized travel through the restricted area, and the embassy vehicle they were in was attacked with rocks. While no one was hurt, the vehicle’s front windshield was smashed. The RSO told us that to avoid similar situations in the future, he would add the personnel’s regularly used e-mail addresses to his distribution list for security notices. At another post, focus group participants stated that they did not receive any information from the RSO or other post officials about the security-related closure of a U.S. consulate in the same country and instead learned about the closure from media sources.
Participants in focus groups at two other posts stated that threat information is often either obsolete by the time they receive it or may not reach staff in time for them to avoid the potential threats. Several factors can lead to untimely receipt of transportation-related threat information. First, as in the example above, RSOs at three posts told us that they send security notices exclusively to post personnel’s state.gov e-mail addresses. However, officials who manage State’s e-mail system told us that some non-State personnel do not have state.gov e-mail addresses, and others who do may not check them regularly. Second, DS has produced limited guidance for RSOs on how to promote timely communication of threat information. By contrast, consular officers, who are responsible for sharing threat information with the nonofficial U.S. community at overseas posts, have detailed guidance from the Bureau of Consular Affairs on how to do so. Among other things, the guidance for consular officers encourages them to use previously cleared language whenever possible and also includes preapproved templates they can use for security-related messages. No such detailed guidance exists for RSOs, according to DS officials. Third, DS officials told us that staff, including RSOs, at some posts mistakenly believe that in cases where threat information applies to both official and nonofficial U.S. citizens and nationals, the RSO cannot share the threat information with the official U.S. community until consular officials have received approval to share the same information with the nonofficial U.S. community—a clearance process that can take as long as 8 hours. In April 2016, State completed an update to the FAM that, according to DS officials, is intended to clarify that RSOs’ sharing of threat information with the official U.S. community should not be delayed by this clearance process. 
However, because the update is found in a section of the FAM about consular affairs—not diplomatic security—it is unclear if RSOs will come across it in the course of their day-to-day duties. Acknowledging this potential challenge, DS officials told us that an update to the diplomatic security section of the FAM, which is currently under review, will include a reference to the relevant consular affairs section of the FAM. Federal internal control standards direct agencies to select appropriate methods of communication so that information is readily available to intended recipients when needed; thus, it is critical that post personnel receive timely information on emerging transportation security threats that enables them to take appropriate mitigation steps. Likewise, as noted earlier, RSOs are responsible for protecting personnel and property at posts—a responsibility that includes communicating transportation-related threat information to post personnel, according to DS officials. Without timely communication of transportation-related security risks and timely receipt of such information, post personnel may be less able to respond to changing security environments and comply with the latest post policies and directives, potentially putting them in harm’s way. All nine of the posts we visited had post-specific travel policies requiring personnel to notify the RSO—and in some cases to obtain approval—before traveling to certain locations. In addition to the travel notification requirements specific to these posts, the FAM contains broader travel notification requirements that apply to all personnel under chief of mission authority at overseas posts, and federal internal control standards emphasize the necessity of communication from personnel to management in order to achieve agency objectives.
Travel notifications allow RSOs or other post officials to take actions to protect personnel, such as prohibiting potential travel to dangerous or restricted areas, providing appropriate security measures such as armored vehicles or additional security briefings, adjusting residential security activities while the occupant is away, and accounting for all post personnel in the event of an emergency. Personnel at more than half of the nine posts we visited cited difficulty using travel notification systems or were unaware or unsure of their post’s travel notification requirements. While three of the nine posts we visited permit personnel to use e-mail or other means to inform the RSO of their travel plans, the remaining six posts require personnel to complete an official travel notification form that is only accessible through a State information system called OpenNet. However, according to officials responsible for managing State’s information resources, including OpenNet, not all post personnel have OpenNet accounts. Specifically, all State personnel at overseas posts have OpenNet accounts, but some non-State agencies, such as the U.S. Agency for International Development, typically only have a limited number of OpenNet account holders at each post; some smaller agencies, such as the Peace Corps, usually have none. One focus group participant from a non-State agency told us that because she does not have an OpenNet account, her ability to submit travel notifications as required depends on whether or not she is able to find one of the few individuals at the post from her agency that does have an OpenNet account. Similarly, the travel notification policy for another post requires that post personnel use an OpenNet-based travel notification system even though the policy explicitly acknowledges that not all post personnel have OpenNet accounts. 
Focus group participants at several posts we visited also stated that they were unaware or unsure of their post’s travel notification requirements. For example, one post we visited requires notifications for all overnight travel, whether official or personal; however, at least one focus group participant at the post believed that such notifications were optional. At another post, focus group participants expressed confusion about which destinations within the host country required advance notification to the RSO. The RSO at that post described an incident in which post personnel traveling to a permitted location mistakenly violated post travel policy because their flight made a stop in a restricted area while en route to the nonrestricted destination. Travel notification requirements are covered in the security briefings personnel receive when they arrive at posts, but, as discussed earlier, participants in many of our focus groups had difficulty recalling certain key information covered in the new arrival briefings or found the one-time briefings to be inadequate. Advance notification of travel plans allows RSOs to act preemptively by assessing the current security situation at a certain location and determining whether to deny travel requests when conditions are particularly dangerous or to provide personnel with relevant threat information or additional security resources—such as armored vehicles and armed security teams. Without notifications, RSOs may not be aware of travel plans and therefore may not take appropriate steps to protect post personnel.
State has taken a number of measures to enhance transportation security for personnel overseas. For example, State provides security officials, post personnel, and their spouses and dependents with various types of training on how to avoid and counter transportation-related security risks. State also plans to expand its Foreign Affairs Counter Threat training to a much broader population over the next 3 years and has taken steps to emphasize that personnel should take responsibility for their own security. However, a variety of weaknesses in State’s implementation of its risk management activities continue to put U.S. personnel at risk. Fragmented guidance, insufficient monitoring of post-level transportation policies, and a lack of clarity in State’s armored vehicle policy for overseas posts make it difficult for State to ensure that measures necessary for protecting key personnel are implemented consistently worldwide, and State has limited insight into armored vehicle needs at some posts. In addition, State lacks an effective means of reinforcing the training personnel receive upon arrival at post, and the two-way sharing of threat and transportation-related information between post security officers and personnel is not always timely. While each of these shortcomings is of concern, in the aggregate, they raise questions about the adequacy of security for U.S. personnel and their families overseas. Until it addresses these issues, State cannot be assured that the deadly threats U.S. personnel and their families may face while in transit overseas are being countered as effectively as possible.

To enhance State’s efforts to manage transportation-related security risks overseas, we recommend that the Secretary of State direct DS to take the following eight actions:

1. Create consolidated guidance for RSOs that specifies required elements to include in post travel notification and transportation security policies. For example, as part of its current effort to develop standard templates for certain security directives, DS could develop templates for transportation security and travel notification policies that specify the elements required in all security directives as recommended by the February 2005 Iraq ARB as well as the standard transportation-related elements that DS requires in such policies.

2. Create more comprehensive guidance for DS reviewers to use when evaluating posts’ transportation security and travel notification policies. For example, the checklist DS reviewers currently use could be modified to stipulate that reviewers should check all security directives for DS-required elements recommended by the February 2005 Iraq ARB. The checklist could also provide guidance on how to take the presence or absence of these required elements into account when assigning a score to a given policy.

3. Clarify whether or not the FAH’s armored vehicle policy for overseas posts is that every post must have sufficient armored vehicles, and if DS determines that the policy does not apply to all posts, articulate the conditions under which it does not apply.

4. Develop monitoring procedures to ensure that all posts comply with the FAH’s armored vehicle policy for overseas posts once the policy is clarified.

5. Implement a mechanism, in coordination with other relevant State offices, to ensure that EACs discuss their posts’ armored vehicle needs at least once each year.

6. Clarify existing guidance on refresher training, such as by delineating how often refresher training should be provided at posts facing different types and levels of threats, which personnel should receive refresher training, and how the completion of refresher training should be documented.

7. Improve guidance for RSOs, in coordination with other relevant State offices and non-State agencies as appropriate, on how to promote timely communication of threat information to post personnel and timely receipt of such information by post personnel.

8. Take steps, in coordination with other relevant State offices and non-State agencies as appropriate, to make travel notification systems easily accessible to post personnel who are required to submit such notifications, including both State and non-State personnel.

We provided a draft of this report for review and comment to State, the U.S. Agency for International Development, and the Peace Corps. We received written comments from State, which are reprinted in appendix II. State generally concurred with 7 of our 8 recommendations and highlighted a number of actions it is taking or plans to take to address the problems that we identified. State did not concur with our sixth recommendation to clarify guidance on refresher training. In its response, State described a number of efforts that RSOs take to keep post personnel informed, such as sending security messages via e-mails and text messages, and therefore State did not believe additional formal training was necessary. We agree that RSOs have made significant efforts to keep post personnel informed. Nevertheless, participants in 10 of our 13 focus groups either had difficulty recalling certain security policies and requirements or described their security briefings as inadequate. Participants noted that this was, in part, because it can be challenging to remember the content of new arrival security briefings while they are simultaneously managing the process of moving and adjusting to a new post and because of the one-time nature of new arrival briefings. DS headquarters officials stated that most violations of post travel policies are due to personnel forgetting the information conveyed in the new arrival briefings.
By clarifying existing guidance that requires refresher briefings, and then providing those briefings, State could potentially remind and update personnel about post security policies and requirements in a more effective setting and on a more regular basis. In addition, RSOs at posts we visited provided security briefings to new arrivals on a regular basis. Thus, allowing staff already at post to attend these regular briefings could involve minimal additional cost or effort. The U.S. Agency for International Development and the Peace Corps did not provide comments on the report. State also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, the Administrator of the U.S. Agency for International Development, and the Director of the Peace Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to evaluate the extent to which the Department of State (State), with regard to transportation security, has (1) established policies, guidance, and monitoring; (2) provided personnel with training; and (3) communicated time-sensitive information. For the purposes of this review, we focused on transportation security for U.S. personnel at overseas posts, which we defined as security for such personnel while they are in transit outside of embassy and consulate compounds or their residences at overseas posts. Our scope did not include transportation-related safety threats, such as road conditions or local drivers. 
We focused primarily on transportation in motor vehicles, but also included travel on foot, public transit, and to the extent that post documents and personnel addressed them, boats and local airlines. We did not focus on transportation-related security issues specific to Iraq and Afghanistan given the unique operating environments in those countries. We focused on U.S. direct hire personnel permanently assigned or on temporary duty under chief-of-mission security responsibility and their family members but excluded locally employed staff. To address the objectives of this report, we reviewed U.S. laws; State’s security policies and procedures as found in the Foreign Affairs Manual, Foreign Affairs Handbooks, and diplomatic cables; Bureau of Diplomatic Security (DS) threat and risk ratings and periodic assessments of post security programs; State budgetary documents and training curricula; classified Accountability Review Board reports concerning transportation-related attacks; and past reports by GAO, State’s Office of Inspector General, and the Congressional Research Service. We assessed DS’s risk management practices against its own policies and federal internal control standards. In addition, we interviewed officials in Washington, D.C., from DS; State’s Bureaus of Administration, Consular Affairs, and Information Resource Management; State’s Offices of Inspector General and Management Policy, Rightsizing, and Innovation; State regional bureaus; the U.S. Agency for International Development; and the Peace Corps. We also attended State’s Foreign Affairs Counter Threat training course to gather firsthand information on the extent to which it covers issues related to transportation security. Additionally, we selected a judgmental sample of 26 posts for which we collected post-level transportation security and travel notification policies, among other documents. For security reasons, we are not naming the specific posts.
Our judgmental sample included three to five embassies or consulates from each of State’s six geographic regions. In addition to ensuring geographic coverage, we selected 22 posts that had relatively high DS-established threat ratings, while also choosing 4 posts with lower threat ratings for comparison purposes. We evaluated the extent to which these 26 posts’ policies contained key elements required by DS, including the criteria recommended by the February 2005 Iraq ARB. We then reviewed various related documents, such as the checklist DS uses in its periodic assessments of post security programs, and spoke with cognizant DS officials to identify factors contributing to cases of noncompliance with the required elements. The findings from our judgmental sample of 26 posts are not generalizable to all posts. We also conducted fieldwork at 9 of these 26 posts. Each of the 9 posts was rated by DS as having a high or critical threat level in one or more of the Security Environment Threat List categories of political violence, terrorism, and crime. Additionally, 8 of the 9 posts we selected for fieldwork were within the top 100 posts rated by DS as the highest risk worldwide, 5 were in the top 75, and 3 were in the top 50. At the 9 posts, we met with officials from State and other agencies involved in transportation security—including regional security officers (RSO), general services officers, community liaison officers, Emergency Action Committee members, and other senior post officials—to understand their roles related to transportation security and their perspectives on State’s associated policies and procedures. In addition, to obtain a wide range of firsthand perspectives from personnel at these 9 posts, we conducted 13 focus group discussions with randomly selected U.S. direct hire personnel who had been at post longer than 3 months. We selected focus group participants from multiple agencies at each post and various sections within State. 
We excluded RSOs and senior post officials from our focus groups in order to encourage participants to provide candid observations on security-related matters. These meetings involved structured, small-group discussions designed to gain more in-depth information about specific issues that cannot easily be obtained from single or serial interviews. Most groups involved 6 to 10 participants. Discussions were structured and guided by a moderator who used the following standardized questions to encourage participants to share their thoughts and experiences.

1. In your opinion, what are the most significant security threats that staff face when traveling in-country at this post? As a reminder, we are interested in security threats posed by other people with intent to harm, not safety threats such as road conditions or local drivers.

2. What guidance or training have you received—whether before or after arriving at this post—on security practices to protect yourself against potential threats or attacks when traveling in-country? This is different from the post’s travel or transportation security policy, which we will ask you about later in the discussion.
   a. Where did you receive that guidance or training? What were the key takeaways from that guidance or training?

3. How easy or difficult is it to routinely apply those security practices at this post?

4. In your opinion, is post’s travel or transportation security policy appropriately tailored to the types and levels of security threats that staff face when traveling in-country at this post?
   a. If yes, in what ways is the travel policy appropriately tailored?
   b. If not, how can the policy be improved?
   c. What factors, if any, create challenges to following the post’s travel policy?

5. In your opinion, have you received all the guidance or training you need to protect yourself against potential threats or attacks when traveling in-country at this post? If not, what additional guidance or training do you believe is needed?

6. What other suggestions do you have, if any, for how staff posted at diplomatic posts overseas can be better protected against potential security threats or attacks when traveling in-country?

As the list indicates, we did not ask about specific security threats, guidance, training, or security practices, but instead asked general questions on each of these topics. For example, we did not specifically ask participants whether they had received certain types of training; rather, we asked a general question about what training they had received and relied on them to volunteer information on the types of training they had taken. However, when appropriate, we did ask more specific follow-up questions during the focus groups. Our overall objective in using a focus group approach was to obtain the views, insights, and beliefs of overseas personnel on issues related to transportation security. While we recorded the audio of each focus group, we assured participants of the anonymity of their responses, promising that their names would not be directly linked to their responses. We also conducted one pretest focus group, after which we asked the participants of the pretest focus group to provide their opinions on whether the questions we asked were comprehensive, clear, unbiased, and appropriate. The participants of the pretest focus group confirmed that our questions were comprehensive, clear, unbiased, and appropriate. To analyze the focus group responses, we reviewed transcripts of the focus group audio recordings and conducted keyword searches to identify key themes related to our reportable objectives. We quantified the frequency of these key themes by counting the number of focus groups (out of 13) in which the themes were raised.
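The quantification step described above (keyword searches over focus group transcripts, followed by a count of how many groups raised each theme) can be sketched in a few lines of Python. This is an illustrative sketch only: the report does not describe the actual tooling used in the analysis, and the theme names, keyword lists, and sample transcripts below are hypothetical.

```python
# Illustrative sketch of the theme-counting approach described in the
# methodology; theme names, keywords, and transcripts are hypothetical.

THEME_KEYWORDS = {
    "briefing_recall": ["new arrival briefing", "forgot", "hard to remember"],
    "notification_confusion": ["travel notification", "unsure", "unaware"],
}

def count_theme_frequencies(transcripts, theme_keywords=THEME_KEYWORDS):
    """For each theme, count the number of focus groups whose transcript
    mentions at least one of that theme's keywords. Each group is counted
    at most once per theme, matching the report's per-group tally."""
    counts = {theme: 0 for theme in theme_keywords}
    for text in transcripts:
        lowered = text.lower()
        for theme, keywords in theme_keywords.items():
            if any(kw in lowered for kw in keywords):
                counts[theme] += 1
    return counts

# One string per focus group transcript (hypothetical excerpts).
transcripts = [
    "Participants said they forgot details from the new arrival briefing.",
    "Several attendees were unsure about the travel notification policy.",
    "No transportation security concerns were raised in this session.",
]
print(count_theme_frequencies(transcripts))
```

A real analysis would of course involve human review of matches in context rather than raw keyword hits alone, consistent with the report's description of reviewing transcripts directly.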
As appropriate, we also followed up with RSOs and other officials at the posts we visited to discuss and clarify the issues raised, while preserving the anonymity of the focus group participants. Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, they are intended to generate in-depth information about the reasons for the focus group participants’ attitudes on specific topics and to offer insights into their concerns about and support for an issue. The generalizability of the information produced by our focus groups is limited because participants were asked questions about their specific experiences related to transportation security, and other personnel who did not participate in our focus groups or were located at different posts may have had different experiences. Because of these limitations, we did not rely entirely on focus groups but rather used several different methodologies to corroborate and support our conclusions. For example, as noted earlier in this appendix, we reviewed a variety of documents and interviewed cognizant officials from multiple agencies and offices. To determine the reliability of the data on funding for transportation security and training records for armored vehicle drivers that we collected, we compared information from multiple sources, checked the data for reasonableness, and interviewed knowledgeable officials regarding the processes they use to collect and track the data. On the basis of these checks, we found the data we collected on funding for transportation security to be sufficiently reliable for the purposes of this engagement. As noted in this report, we found inaccuracies in the training records for armored vehicle drivers. 
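The cross-source comparison and reasonableness checks described above can be sketched as follows. The funding figures, fiscal years, and 5 percent tolerance below are hypothetical placeholders for illustration only; they are not State's actual data or GAO's actual test.

```python
# Flag any fiscal year where funding figures reported by two independent
# sources diverge by more than a relative tolerance. All values below
# are illustrative placeholders.
TOLERANCE = 0.05  # flag relative differences greater than 5 percent

source_a = {"FY2013": 100.0, "FY2014": 110.0, "FY2015": 120.0}
source_b = {"FY2013": 101.0, "FY2014": 125.0, "FY2015": 119.0}

def flag_discrepancies(a, b, tolerance=TOLERANCE):
    flagged = []
    for year in sorted(set(a) & set(b)):  # only years present in both sources
        relative_diff = abs(a[year] - b[year]) / max(a[year], b[year])
        if relative_diff > tolerance:
            flagged.append(year)
    return flagged

print(flag_discrepancies(source_a, source_b))  # only FY2014 exceeds 5 percent
```

Years flagged by a check like this would then be followed up with the officials who collect and track the data, as the appendix describes.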
We interviewed cognizant DS officials to understand the causes of the inaccuracies as well as State's plans to address them. We also collected data on the worldwide distribution of armored vehicles by post. This report is a public version of a sensitive but unclassified report that was issued on September 9, 2016, copies of which are available upon request for official use only by those with the appropriate need to know. This report does not contain certain information that State regarded as sensitive but unclassified and requested that we remove. We provided State a draft copy of this report for sensitivity review, and State agreed that we had appropriately removed all sensitive but unclassified information. We conducted this performance audit from July 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Thomas Costa (Assistant Director), Joshua Akery, Aniruddha Dasgupta, David Dayton, Martin De Alteriis, Neil Doherty, Justin Fisher, Lina Khan, Jill Lacey, Grace Lui, and Oziel Trevino made key contributions to this report.

Diplomatic Security: State Should Enhance Management of Transportation-Related Risks to Overseas U.S. Personnel. GAO-16-615SU. Washington, D.C.: September 9, 2016.
Diplomatic Security: Options for Locating a Consolidated Training Facility. GAO-15-808R. Washington, D.C.: September 9, 2015.
Diplomatic Security: State Department Should Better Manage Risks to Residences and Other Soft Targets Overseas. GAO-15-700. Washington, D.C.: July 9, 2015.
Diplomatic Security: State Department Should Better Manage Risks to Residences and Other Soft Targets Overseas. GAO-15-512SU. Washington, D.C.: June 18, 2015.
Combating Terrorism: Steps Taken to Mitigate Threats to Locally Hired Staff, but State Department Could Improve Reporting on Terrorist Threats. GAO-15-458SU. Washington, D.C.: June 17, 2015.
Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-655. Washington, D.C.: June 25, 2014.
Diplomatic Security: Overseas Facilities May Face Greater Risks Due to Gaps in Security-Related Activities, Standards, and Policies. GAO-14-380SU. Washington, D.C.: June 5, 2014.
Countering Overseas Threats: Gaps in State Department Management of Security Training May Increase Risk to U.S. Personnel. GAO-14-360. Washington, D.C.: March 10, 2014.
State Department: Diplomatic Security Challenges. GAO-13-191T. Washington, D.C.: November 15, 2012.
Diplomatic Security: Expanded Missions and Inadequate Facilities Pose Critical Challenges to Training Efforts. GAO-11-460. Washington, D.C.: June 1, 2011.
Overseas Security: State Department Has Not Fully Implemented Key Measures to Protect U.S. Officials from Terrorist Attacks Outside of Embassies. GAO-05-642. Washington, D.C.: May 9, 2005.
Overseas Security: State Department Has Not Fully Implemented Key Measures to Protect U.S. Officials from Terrorist Attacks Outside of Embassies. GAO-05-386SU. Washington, D.C.: May 9, 2005.

U.S. diplomatic personnel posted overseas continue to face threats to their security. According to State, personnel and their families are particularly vulnerable when traveling outside the relative security of diplomatic work facilities or residences. In many serious or fatal attacks on U.S. personnel over the last three decades, victims were targeted while in motorcades, official vehicles, or otherwise in transit.
GAO was asked to review how State manages transportation-related security risks to U.S. diplomatic personnel overseas. For this report, GAO evaluated the extent to which State, with regard to transportation security at overseas posts, has (1) established policies, guidance, and monitoring; (2) provided personnel with training; and (3) communicated time-sensitive information. GAO reviewed agency documents and met with key officials in Washington, D.C. GAO also reviewed policies from a judgmental sample of 26 posts—primarily higher-threat, higher-risk locations—and conducted fieldwork and met with officials at 9 of these posts. This is the public version of a sensitive but unclassified report issued in September 2016. The Department of State (State) has established policies related to transportation security for overseas U.S. personnel, but gaps exist in guidance and monitoring. GAO reviewed 26 posts and found that all 26 had issued transportation security and travel notification policies. However, policies at 22 of the 26 posts lacked elements required by State, due in part to fragmented implementation guidance on what such policies should include. State also lacks a clear armored vehicle policy for overseas posts and procedures for monitoring if posts are assessing their armored vehicle needs at least annually as required by State. These gaps limit State's ability to ensure that posts develop clear policies that are consistent with State's requirements and that vehicle needs for secure transit are met. While State provides several types of training related to overseas transportation security, weaknesses exist in post-specific refresher training. 
Regional security officers (RSO) receive required training related to transportation security in special agent courses, and nonsecurity staff reported receiving relevant training before departing for posts—including on topics such as defensive driving and the importance of taking personal responsibility for one's security—as well as new arrival briefings at posts. At most of the 9 posts GAO visited, however, staff had difficulty remembering key details covered in new arrival briefings or described the one-time briefings as inadequate. State's requirements for providing refresher briefings are unclear, potentially putting staff at greater risk. State uses various systems at overseas posts to communicate time-sensitive information related to transportation security, but several factors hinder its efforts. RSOs and other post officials are responsible for communicating threat information to post personnel. However, at 4 of the 9 posts it visited, GAO learned of instances in which staff did not receive important threat information in a timely manner for various reasons. In one case, this resulted in an embassy vehicle being attacked with rocks and seriously damaged while traveling through a prohibited area. In addition, while all 9 of the posts GAO visited require that personnel notify the RSO before traveling to certain locations, personnel at more than half of the 9 posts said they were unaware of these requirements or had difficulty accessing required travel notification systems. Timely communication is critical for managing transportation security risks, and failure to communicate important transportation-related information and receive such information promptly could leave overseas personnel facing avoidable security risks. 
GAO is making eight recommendations in this report to help State improve its management of transportation-related security risks by enhancing associated policies, guidance, and monitoring; clarifying its requirements for refresher briefings; and better communicating time-sensitive information. State agreed to take steps for all but one recommendation—the need to clarify its requirements for refresher briefings. GAO continues to believe this is needed as discussed in the report.
Systems engineering and test and evaluation are critical parts of the weapon system acquisition process and how well these activities are conducted early in the acquisition cycle can greatly affect program outcomes. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. Early systems engineering provides the knowledge that weapon system requirements are achievable with available resources such as technologies, time, people, and money. It allows a product developer to identify and resolve performance and resource gaps before product development begins by reducing requirements, deferring them to the future, or increasing the estimated cost for the weapon system’s development. Systems engineering plays a fundamental role in the establishment of the business case for a weapon acquisition program by providing information to DOD officials to make tradeoffs between requirements and resources. Systems engineering is then applied throughout the acquisition process to manage the engineering and technical risk in designing, developing, and producing a weapon system. The systems engineering processes should be applied prior to the start of a new weapon acquisition program and then continuously throughout the life-cycle. Test and evaluation provides information about the capabilities of a weapon system and can assist in managing program risk. There are generally two broad categories of testing: developmental and operational. Developmental testing is used to verify the status of technical progress, substantiate achievement of contract technical performance, and certify readiness for initial operational testing. 
Early developmental testing reduces program risks by evaluating performance at progressively higher component and subsystem levels, thus allowing program officials to identify problems early in the acquisition process. Developmental testing officials in the Office of the Secretary of Defense and the military services provide guidance and assistance to program managers on how to develop sound test plans. The amount of developmental testing actually conducted, however, is controlled by the program manager and the testing requirements explicitly specified in the development contract. In contrast, operational testing determines if a weapon system provides operationally useful capability to the warfighter. It involves field testing a weapon system, under realistic conditions, to determine the effectiveness and suitability of the weapon for use in combat by military users, and the evaluation of the results of such tests. DOD's Director of Operational Test and Evaluation conducts independent assessments of programs and reports the results to the Secretary of Defense and Congress. In 2008, the Defense Science Board reported that operational testing over the previous 10 years showed that there had been a dramatic increase in the number of weapon systems that did not meet their suitability requirements. The board found that failure rates were caused by several factors, notably the lack of both a disciplined systems engineering process early in development and a robust reliability growth program. The board also found that weaknesses in developmental testing, acquisition workforce reductions and retirements, limited government oversight, increased complexity of emerging weapon systems, and increased reliance on commercial standards (in lieu of military specifications and standards) all contributed to these failure rates.
For example, over the last 15 years, all service acquisition and test organizations experienced significant personnel cuts, including the loss of a large number of the most experienced technical and management personnel, including subject matter experts, without an adequate replacement pipeline. The services now rely heavily on contractors to help support these activities. Over the past two decades, the prominence of the developmental testing and systems engineering communities within the Office of the Secretary of Defense has continuously evolved, as the following examples illustrate. In 1992, a systems engineering directorate did not exist and the developmental test function was part of the Office of the Director of Test and Evaluation, which reported directly to the Under Secretary of Defense for Acquisition. At that time, the director had direct access to the Under Secretary on an array of issues related to test policy, test assets, and the workforce. In 1994, the Development Test, Systems Engineering and Evaluation office was formed. This organization effectively expanded the responsibilities of the former testing organization to formally include systems engineering. The organization had two deputy directors: the Deputy Director, Development Test and Evaluation, and the Deputy Director, Systems Engineering. This organization was dissolved in 1999. From 1999 to 2006, systems engineering and developmental testing responsibilities were aligned under a variety of offices. The responsibility for managing test ranges and resources, for example, was transferred to the Director of Operational Test and Evaluation. This function was later moved to the Test Resource Management Center, which reports directly to AT&L, where it remains today. In 2004, a Director of Systems Engineering was re-established and then in 2006 this became the System and Software Engineering Directorate. Developmental testing activities were part of this directorate’s responsibilities. 
As a result, systems engineering and developmental testing issues were reported indirectly to AT&L through the Deputy Under Secretary for Acquisition and Technology. Congress passed the Weapon Systems Acquisition Reform Act of 2009 (Reform Act)—the latest in a series of congressional actions taken to strengthen the defense acquisition system. The Reform Act establishes a Director of Systems Engineering and a Director of Developmental Test and Evaluation within the Office of the Secretary of Defense and defines the responsibilities of both offices. The Reform Act requires the services to develop, implement, and report on their plans for ensuring that systems engineering and developmental testing functions are adequately staffed to meet the Reform Act requirements. In addition, it requires the directors to report to Congress on March 31 of each year on military service and major defense acquisition program systems engineering and developmental testing activities from the previous year. For example, the report is to include a discussion of the extent to which major defense acquisition programs are fulfilling the objectives of their systems engineering and developmental test and evaluation master plans, as well as provide an assessment of the department’s organization and capabilities to perform these activities. Figure 1 shows some of the major reorganizations over the past two decades, including the most recent change where DOD decided to place the two new directors’ offices under the Director of Defense Research and Engineering. DOD has made progress in implementing the systems engineering and developmental test and evaluation provisions of the Reform Act, but has not yet developed performance criteria that would help assess the effectiveness of the changes. Some requirements, such as the establishment of the two new offices, have been fully implemented. 
The implementation of other requirements, such as the review and approval of systems engineering and developmental test and evaluation plans, has begun but requires sustained efforts. The department has not fully implemented other requirements. For example, DOD has begun development of joint guidance that will identify measurable performance criteria to be included in the systems engineering and developmental testing plans. DOD initially decided that one discretionary provision of the act—naming the Director of Developmental Test and Evaluation also as the Director of the Test Resource Management Center—would not be implemented. However, the Director of Defense Research and Engineering is currently examining the implications of this organizational change. It will be several years before the full impact of the Reform Act provisions is known. The offices of the Director of Systems Engineering and Developmental Test and Evaluation were officially established by the Under Secretary of Defense for AT&L in June 2009 to be his principal advisors on systems engineering and developmental testing matters. The directors took office 3 months and 9 months later, respectively, and are working on obtaining the funding, workforce, and office space needed to accomplish their responsibilities. The directors have also completed evaluations of the military services’ organizations and capabilities for conducting systems engineering and developmental testing, and identified areas for improvement. These evaluations were based on reports provided by the services that were also required by the Reform Act. As shown in table 1, many of the requirements that have been implemented will require ongoing efforts. 
The directors have the responsibility for reviewing and approving systems engineering and developmental test and evaluation plans as well as the ongoing responsibility to monitor the systems engineering and developmental test and evaluation activities of major defense acquisition programs. During fiscal year 2009, the Director of Systems Engineering reviewed 22 systems engineering plans and approved 16, while the Director of Developmental Test and Evaluation reviewed and approved 25 developmental test and evaluation plans within the test and evaluation master plans. Both offices are monitoring and reviewing activities on a number of major acquisition programs, including the Virginia Class Submarine, the Stryker Family of Vehicles, and the C-130 Avionics Modernization Program. Once their offices are fully staffed, the directors plan to increase efforts in reviewing and approving applicable planning documents and monitoring the activities of about 200 major defense acquisition and information system programs. Evaluations of 42 weapon systems were included in the directors’ first annual joint report to Congress. The individual systems engineering program assessments were consistent in that they typically included information on 10 areas, including requirements, critical technologies, technical risks, reliability, integration, and manufacturing. In some cases, the assessments also included an overall evaluation of whether the program was low, medium, or high risk; the reasons why; and a general discussion of recommendations or efforts the director has made to help program officials reduce any identified risk. Examples include the following. In an operational test readiness assessment of the EA-18G aircraft, the Director of Systems Engineering found multiple moderate-level risks related to software, communications, and mission planning and made recommendations to reduce the risks. 
The program acted on the risks and recommendations identified in the assessment and delayed the start of initial operational testing by 6 weeks to implement the fixes. It has completed initial operational testing and was found to be effective and suitable by Navy testers. The Director of Operational Test and Evaluation rated the system effective but not suitable, and stated that follow-on testing has been scheduled to verify correction of noted deficiencies. The program received approval to enter full-rate production and is rated as low risk in the joint annual report. The systems engineering assessment rated the Global Hawk program as high risk pending the determination of actual system capability; it also stated that there is a high probability that the system will fail operational testing. The assessment cited numerous issues, including questions regarding the system's ability to meet mission reliability requirements, poor system availability, and the impact of simultaneous weapon system block builds (concurrency). Despite the director's concerns and efforts to help the program office develop a reliability growth plan for Global Hawk, no program funding has been allocated to support reliability improvements. The Expeditionary Fighting Vehicle assessment did not include an overall evaluation of risk. The assessment noted that the program was on track to meet the reliability key performance parameter of 43.5 hours mean time between operational mission failure. Problems related to meeting this and other reliability requirements were a primary reason why the program was restructured in 2007. However, the assessment did not address the high degree of concurrency between development and production, which will result in a commitment to fund 96 low-rate initial procurement vehicles prior to demonstrating that the vehicle can meet the reliability threshold value at initial operational test and evaluation, currently scheduled for completion by September 2016.
Developmental testing assessments covered fewer programs and were not as structured as those provided by the systems engineering office in that there were no standard categories of information that were included in each assessment. Part of the reason is that the Director of the Developmental Test and Evaluation office was just developing the necessary expertise to review and provide formal assessments of programs. For the programs that were reviewed, the assessments included a status of developmental testing activities on programs and in some cases an assessment of whether the program was low, medium, or high risk. For example, the Director of Developmental Test and Evaluation supported an assessment of operational test readiness for the C-5 Reliability Enhancement and Reengining Program. The assessment stated that due to incomplete testing and technical issues found in developmental testing, there is a high risk of failure in operational testing. The assessment recommended that the program resolve these issues before beginning operational testing. The Reform Act also requires that the Director of Systems Engineering develop policies and guidance on, among other things, the use of systems engineering principles and best practices and the Director of Developmental Test and Evaluation develop policies and guidance on, among other things, the conduct of developmental testing within DOD. The directors have issued some additional policies to date, such as expanded guidance on addressing reliability and availability on weapon programs and on incorporating test requirements in acquisition contracts. The directors plan to update current guidance and issue additional guidance in the future. According to DOD officials, there are over 25 existing documents that provide policy and guidance for systems engineering and developmental testing. 
The directors also have an ongoing responsibility to advocate for and support their respective DOD acquisition workforce career fields, and have begun examining the training and education needs of these workforces. Two provisions, one of which is discretionary, have not been completed. The Reform Act requires that the directors, in coordination with the newly established office of the Director for Performance Assessments and Root Cause Analyses, issue joint guidance on the development of detailed, measurable performance criteria that major acquisition programs should include in their systems engineering and testing plans. The performance criteria would be used to track and measure the achievement of specific performance objectives for these programs, giving decision makers a clearer understanding of each program's performance and progress. The offices have begun efforts to develop these policies and guidance, but specific completion dates have not been identified. At this time, it is unclear whether the guidance will include specific performance criteria that should be consistently tracked on programs and any risks associated with these programs, such as ones related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and adequacy of program resources. Finally, the Reform Act gives DOD the option of permitting the Director of Developmental Test and Evaluation to serve as the Director of the Test Resource Management Center. DOD initially decided not to exercise this option. However, the Director of Defense Research and Engineering recently stated that his organization is examining the possibility of consolidating the offices.
The director stated that it makes sense to combine the two offices because it would merge test oversight and test resource responsibilities under one organization, but the ultimate decision will be based on whether there are any legal obstacles to combining the two offices. While most of the Reform Act’s requirements focus on activities within the Office of the Secretary of Defense, the military services are ultimately responsible for ensuring that their weapon systems start off with strong foundations. To that end, in November 2009, the services, in reports to the Directors of Systems Engineering and Developmental Test and Evaluation, identified plans for ensuring that appropriate resources are available for conducting systems engineering and developmental testing activities. The individual reports also highlighted management initiatives undertaken to strengthen early weapon acquisition activities. For example, the Army is establishing a center at Aberdeen Proving Ground that will focus on improving reliability growth guidance, standards, methods, and training for Army acquisition programs. The Navy has developed criteria, including major milestone reviews and other gate reviews, to assess the “health” of testing and evaluation at various points in the acquisition process. The Air Force has undertaken an initiative to strengthen requirements setting, systems engineering, and developmental testing activities prior to the start of a new acquisition program. Air Force officials believe this particular initiative will meet the development planning requirements of the Reform Act. Experts provided different viewpoints on the proper placement of the new systems engineering and developmental test and evaluation offices, with some expressing concern that as currently placed, the offices will wield little more power or influence than they had prior to the passage of the Reform Act. 
According to the Director of Defense Research and Engineering, the Under Secretary of Defense for AT&L placed the new offices under his organization because the department wanted to put additional emphasis on systems engineering and developmental testing prior to the start of a weapons acquisition program. The director believes this is already occurring and that both offices will continue to have a strong relationship with acquisition programs even though they do not report directly to an organization with significant involvement with major defense acquisition programs. However, many current and former DOD systems engineering and developmental testing officials we spoke with believe the offices should be closely linked to weapon acquisition programs because most of their activities are related to those programs. Similarly, the Defense Science Board recommended that a developmental testing office be established and report directly to an organization that has significant involvement with major defense acquisition programs. In addition, officials we spoke with believe several other significant challenges, including those related to staffing and the culture of the Defense Research and Engineering organization, are already negatively affecting the offices’ effectiveness. DOD has not established any performance criteria that would help gauge the success of the new directors’ offices, making it difficult to determine if the offices are properly aligned within the department or if the Reform Act is having an impact on program outcomes. After the passage of the Reform Act, DOD considered several options on where to place the new offices of the Director of Systems Engineering and Director of Developmental Test and Evaluation. According to an official who helped evaluate potential alternatives, DOD could have aligned the offices under AT&L in several different ways (see fig. 2). 
For example, the offices could have reported directly to the Under Secretary of AT&L or indirectly to the Under Secretary of AT&L either through the Assistant Secretary of Defense (Acquisition) or the Director of Defense Research and Engineering. DOD decided to place the offices under the Director of Defense Research and Engineering, an organization that previously primarily focused on science and technology issues. The Director of Defense Research and Engineering is aware of the challenges of placing the offices under an organization whose primary mission is to develop and transition technologies to acquisition programs, but believes that the current placement makes sense given congressional and DOD desires to place more emphasis on activities prior to the start of a new acquisition program. He stated that the addition of systems engineering and developmental testing not only stretches the role and mission of his organization, but also strengthens the organization's role in acquisitions because it helps give the organization's research staff another point of view in thinking about future technologies and systems. He plans for the offices to perform both assessment and advisory activities, including: providing risk assessments of acquisition programs for the Defense Acquisition Board, continuing to help programs succeed by providing technical insight and assisting the programs in the development of the systems engineering plan and the test and evaluation master plan, and educating and assisting researchers to think through new concepts or technologies using systems engineering to inform fielding and transition strategies. According to the Director of Defense Research and Engineering, the offices are already performing some of these functions. For example, the new directors have provided technical input to the Defense Acquisition Board on various weapons programs.
The director stated the systems engineering organization is reviewing manufacturing processes and contractor manufacturing readiness for weapons programs such as the Joint Strike Fighter. In addition, a developmental testing official stated they are assisting the Director of Defense Research and Engineering Research Directorate in conducting technology readiness assessments and helping programs identify the trade spaces for testing requirements while reviewing the test and evaluation master plan. The director believes the value of having the offices perform both assessment and advisory activities is that they can look across the acquisition organization and identify programs that are succeeding from a cost, schedule, and performance perspective and identify common threads or trends that enable a program to succeed. Conversely, they could identify common factors that make programs fail. The Director of Defense Research and Engineering identified three challenges that he is trying to address in order for systems engineering and developmental testing to have a more positive influence on weapon system outcomes. First, the director would like to improve the technical depth of the systems engineering and developmental testing offices. Both functions have atrophied over the years and need to be revitalized. This will require the offices to find highly qualified people to fill the positions, which will not be easy. Second, the director wants to improve the way the Defense Research and Engineering organization engages with other DOD organizations that are involved in weapon system acquisition. The director noted that there are a lot of players and processes involved in weapon acquisition and that the systems engineering office can play a large role in facilitating greater interaction. Third, the director would like the Defense Research and Engineering organization to find better ways to shape, engage with, contract with, and get information from the defense industrial base. 
In addition to the three challenges, it will also be difficult to determine whether the two new offices are having a positive impact on weapon system outcomes. The Directors of Systems Engineering and Developmental Test and Evaluation are not reporting the number of recommendations implemented by program managers or the impact the recommendations have had on weapon programs, which would allow senior leaders to gauge the success of the two offices. This type of information could help the Under Secretary of AT&L determine if the offices need to be placed under a different organization, if the offices need to place more emphasis on advisory or assessment activities, and if the Reform Act is having an impact on program outcomes. The vast majority of current and former DOD systems engineering and test officials we spoke with were opposed to the placement of the offices under the Director of Defense Research and Engineering. Their chief concern is that the mission of the Director of Defense Research and Engineering organization is primarily focused on developing new technologies and transitioning those technologies to acquisition programs. While they recognize that the systems engineering and developmental testing offices need to be involved in activities prior to the official start of a new weapons program, they believe the offices’ expertise should be focused on helping DOD acquisition programs establish doable requirements given the current state of technologies, not on the technologies themselves. Therefore, they believe the offices would be more appropriately placed under the newly established offices of the Principal Deputy Under Secretary of Defense for AT&L or the Assistant Secretary of Defense for Acquisition, whose missions are more closely aligned with acquisition programs. 
Some officials we spoke with believe that a cultural change involving the focus and emphasis of the office of the Director of Defense Research and Engineering will have to take place in order for that organization to fully support its role in overseeing acquisition programs and improving the prominence of the two new offices within the department. However, these same officials believe that this cultural change is not likely to occur and that the Director of Defense Research and Engineering will continue to focus primarily on developing and transitioning new technologies to weapon programs. Therefore, the offices may not get sufficient support and resources or have the clout within DOD to effect change. One former systems engineering official pointed out that the historic association of systems engineering with the Director of Defense Research and Engineering does not bode well for the systems engineering office. Based upon his experience, the Director of Defense Research and Engineering’s focus and priorities resulted in a fundamental change in philosophy for the systems engineering mission, the virtual elimination of a comprehensive focus on program oversight or independent identification of technical risk, and a reduction in systems engineering resources. In short, he found that the Director of Defense Research and Engineering consistently focused on science and technology, in accordance with the organization’s charter, with systems engineering being an afterthought. Likewise, current and former developmental testing officials are concerned about the Director of Defense Research and Engineering’s support for developmental testing activities. They identified several staffing issues that they believe are key indicators of a lack of support. 
First, they pointed out that it took almost 9 months from the time the Director of Developmental Test and Evaluation office was established before a new director was in place compared to 3 months to place the Director of Systems Engineering. If developmental testing was a priority, officials believe that the Director of Defense Research and Engineering should have filled the position earlier. Second, test officials believe the Director of Developmental Test and Evaluation office needs to have about the same number of staff as the offices of the Director of Systems Engineering and the Director of Operational Test and Evaluation. According to officials, DOD currently plans to have about 70 people involved with developmental testing activities, 180 people for systems engineering, and 250 for operational testing. However, testing officials believe the offices should be roughly the same size given the fact that developmental testing will cover the same number of programs as systems engineering and operational testing and that roughly 80 percent of all testing activities are related to developmental tests, with the remaining 20 percent being for operational tests. Third, even though the Director of Developmental Test and Evaluation expects the office to grow to about 70 people by the end of fiscal year 2011, currently there are 30 people on board. The director believes there are a sufficient number of qualified people seeking positions and therefore the office could be ramped up more quickly. Finally, the Director of Developmental Test and Evaluation stated that his office has only one senior-level executive currently on staff who reports to him and that there are no plans to hire more for the 70-person organization. The director believes it is crucial that the organization have more senior-level officials because of the clout they carry in the department. 
The director believes that the lack of an adequate number of senior executives in the office weakens its ability to work effectively with or influence decisions made by other DOD organizations. Further, officials from other testing organizations, as well as the systems engineering office, indicated they have two or more senior executive-level employees. A May 2008 Defense Science Board report, which was focused on how DOD could rebuild its developmental testing activities, recommended that developmental testing be an independent office that reports directly to the Deputy Under Secretary of Defense (Acquisition and Technology). At that time, according to the report, there was no office within the Office of the Secretary of Defense with comprehensive developmental testing oversight responsibility, authority, or staff to coordinate with operational testing. In addition, the existing residual organizations lacked the clout to provide development test guidance and developmental testing was not considered to be a key element in AT&L system acquisition oversight. According to the study director, placing the developmental testing office under the Director of Defense Research and Engineering does not adequately position the new office to perform the oversight of acquisition programs. The military services, the Directors of Systems Engineering and Developmental Test and Evaluation, and we have identified a number of workforce and resource challenges that the military services will need to address to strengthen their systems engineering and developmental testing activities. For example, it is unclear whether the services have enough people to perform both systems engineering and developmental testing activities. Even though the services reported to the directors that they have enough people, they do not have accurate information on the number of people performing these activities. 
The Director of Developmental Test and Evaluation disagreed with the services’ assertions, but did not know how many additional people are needed. Service officials have also expressed concern about the department’s ability to train individuals who do not meet requisite certification requirements on a timely basis and being able to obtain additional resources to improve test facilities. The military services were required by the Reform Act to report on their plans to ensure that they have an adequate number of trained systems engineering and developmental testing personnel and to identify additional authorities or resources needed to attract, develop, train, and reward their staff. In November 2009, the military services submitted their reports to the respective directors within the Office of the Secretary of Defense on their findings. In general, the services concluded that even with some recruiting and retention challenges, they have an adequate number of personnel to conduct both systems engineering and developmental testing activities (see table 2 below). According to service officials, this determination was based on the fact that no program offices identified a need for additional staffing to complete these activities. The reports also stated the services generally have sufficient authorities to attract and retain their workforce. In DOD’s first annual joint report to Congress, the Director of Developmental Test and Evaluation did not agree with the military services’ assertion that they have enough staff to perform the full range of developmental testing activities. The director does not know how many more personnel are needed, but indicated that the office plans to work with the services to identify additional workforce needs. The Director of Systems Engineering agreed with the services’ reports that they have adequate staffing to support systems engineering activities required by current policy. 
According to the director, this was based on the 35,000 current personnel identified in the System Planning, Research Development, and Engineering workforce—a generic workforce category that includes systems engineering activities—as well as the services’ plans to hire over 2,500 additional personnel into this same workforce category over the next several years. Although not clearly articulated in the services’ reports, military service officials acknowledged that the personnel data in their reports may not be entirely accurate. For example, officials believe the systems engineering numbers identified in table 2 overstate the number of people actually performing systems engineering activities because that particular career field classification is a generic category that includes all types of engineers. The developmental test workforce shown in the table does not completely reflect the number of people who actually perform developmental testing activities because the information provided by the military services identifies only personnel in the test and evaluation career field. Service officials told us that there are many other people performing these activities who are identified in other career fields. The Director of Developmental Test and Evaluation believes these other people may not be properly certified and that, in the case of contractors, they do not possess certifications equivalent to the certification requirements for government personnel. This director plans to request another report from the services in fiscal year 2010. This report will address overall workforce data; it will cover current staffing assigned to early test and evaluation activities, training and certification concerns related to in-sourcing staff, rapid acquisition resource plans, and infrastructure needs for emerging technologies. The Director of Systems Engineering does not intend to request another report from the services.
Nevertheless, each of the military services plans to increase its systems engineering workforce over the next several years. The exact number of personnel is uncertain because the services’ hiring projections relate to a general engineering personnel classification, not a specific systems engineering career field. The directors also identified challenges they believe the services will face in strengthening systems engineering and developmental testing activities. The Director of Systems Engineering pointed out that the services need to put greater emphasis on development planning activities, as called for by the Reform Act. The services are currently conducting these activities to some extent, but the director believes a more robust and consistent approach is needed. The Director of Developmental Test and Evaluation highlighted two other challenges facing the military services. First, the director would like to increase the number of government employees performing test and evaluation activities. The services experienced significant personnel cuts in these areas in the mid-1990s and have had to rely on contractors to perform the work. DOD’s joint report to Congress noted that the Air Force in particular relies heavily on prime contractor evaluations and that this approach could lead to test results that are inaccurate, misleading, or not qualified, resulting, in turn, in premature fielding decisions, since prime contractors would not be giving impartial evaluations of results. The director believes there are a number of inherently governmental test and evaluation functions that produce a more impartial evaluation of results and that a desired end state would be one where there is an appropriate amount of government and contractor testing. Second, the director is concerned that DOD does not have the capacity to train and certify an estimated 800 individuals expected to be converted from contractor to government employees within the required time frame.
While most of the contractors are expected to have some level of training and experience performing test activities, they probably will not meet certifications required of government employees because they have not had the same access to DOD training. In addition to those challenges recognized by the directors, we have identified other challenges we believe the services may face in implementing more robust systems engineering and developmental testing, including the following. According to the military services, they plan to meet hiring targets primarily through the conversion of contractors who are already performing those activities, but do not have plans in place to ensure that they have the right mixture of staff and expertise both now and in the future. DOD officials acknowledge that they do not know the demographics of the contractor workforce. However, they believe many contractors are often retired military with prior systems engineering experience. Therefore, while they may be able to meet short-term needs, there could be a challenge in meeting long-term workforce needs. Army test officials indicated that they have experienced a significant increase in their developmental testing workload since the terrorist attacks of September 2001, with no corresponding increase in staffing. As a result, personnel at their test ranges are working longer hours and extra shifts, which testing officials are concerned may affect their retention rates. Army officials also indicated that test ranges are deteriorating more quickly than expected and they may not have the appropriate funding to upgrade and repair the facilities and instrumentation. Test personnel are often operating in obsolete and outdated facilities that cannot meet test requirements, resulting in safety issues, potential damage to equipment, and degraded quality of life. DOD’s increased emphasis on fielding rapid acquisition systems may require the services to tailor their approach to systems engineering. 
According to an Air Force official, efforts that normally take months to complete for a more traditional acquisition program have to be completed in a matter of weeks for rapid acquisition programs. DOD efforts to implement Reform Act requirements are progressing, but it will take some time before the results of these efforts can be evaluated. Current and former systems engineering and developmental testing officials offer compelling insights concerning the placement of the new directors’ offices under the Office of the Director of Defense Research and Engineering, but it is still too soon to judge how effective the offices will be at influencing outcomes on acquisition programs. The current placement of the offices may present several challenges that could hinder their ability to effectively oversee weapon system acquisition programs and ensure that risks are identified, discussed, and addressed prior to the start of a new program or the start of operational testing. Foremost among these potential challenges is the ability of the Director of Defense Research and Engineering to change the focus of the organization to effectively assimilate the roles and missions of the two new offices and then ensure that the offices are properly staffed and have the appropriate number of senior leaders. The mission of the office of the Director of Defense Research and Engineering has been to develop technology for weapon programs; its focus has not been to manage the technical aspects of weapon system acquisition programs. Ultimately, the real proof of whether an organization outside of the major defense acquisition program arena can influence acquisition program decisions and outcomes should be based on results. The directors’ offices have started to assess and report on the systems engineering and developmental testing activities on some of the major defense acquisition programs.
They have also made recommendations and worked with program officials to help reduce risks on programs such as the EA-18G, Global Hawk, and the C-5 Reliability Enhancement and Reengining programs. However, guidance on the development and tracking of performance criteria that would provide an indication of how much risk is associated with a particular weapon system—such as those related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and adequacy of program resources—has yet to be developed. Further, the directors are not reporting to Congress on the extent to which programs are implementing recommendations and the impact recommendations are having on weapon programs, which would provide some insight as to the impact the two offices are having on acquisition programs. Although not required by the Reform Act, this type of information could be useful for Congress to gauge the effectiveness of the directors’ offices. The military services, which face increasing demands to develop and field more reliable weapon systems in shorter time frames, may need additional resources and training to ensure that adequate developmental testing and systems engineering activities are taking place. However, DOD’s first joint annual report to Congress, which was supposed to assess the department’s organization and capabilities for performing systems engineering and developmental testing activities, did not clearly identify the workforce performing these activities, future workforce needs, or specific hiring plans. In addition, DOD’s strategy to provide the necessary training within the required time period to the large number of staff it plans to hire is unclear. Therefore, workforce and training gaps are unknown. 
In order to determine the effectiveness of the newly established offices, we recommend that the Secretary of Defense direct the Directors of Systems Engineering and Developmental Test and Evaluation to take the following five actions: Ensure development and implementation of performance criteria for systems engineering plans and developmental test and evaluation master plans, such as those related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and the adequacy of program resources. Track the extent to which program offices are adopting systems engineering and developmental testing recommendations. Work with the services to determine the appropriate number of government personnel needed to perform the scope of systems engineering and developmental testing activities. Develop plans for addressing the training needs of the new hires and contractors who are expected to be converted to government personnel. Report to Congress on the status of these efforts in future joint annual reports required by the Reform Act. DOD provided us with written comments on a draft of this report. DOD concurred with each of the recommendations, as revised in response to agency comments. DOD’s comments appear in appendix I. Based upon a discussion with DOD officials during the agency comment period, we revised the first recommendation. Specifically, instead of recommending that the Directors of Systems Engineering and Developmental Test and Evaluation develop a comprehensive set of performance criteria that would help assess program risk, as stated in the draft report, we now recommend that the directors ensure the development and implementation of performance criteria for systems engineering plans and developmental test and evaluation master plans. 
The wording change clarifies the nature and scope of performance criteria covered by our recommendation and is consistent with Reform Act language that requires the directors to develop guidance on the development of detailed, measurable performance criteria that major acquisition programs should include in their systems engineering and developmental testing plans. According to DOD officials, the military services are then responsible for developing the specific criteria that would be used on their respective programs. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Bruce Thomas, Assistant Director; Cheryl Andrew; Rae Ann Sapp; Megan Hill; and Kristine Hassinger. | In May 2009, Congress passed the Weapon Systems Acquisition Reform Act of 2009 (Reform Act). The Reform Act contains a number of systems engineering and developmental testing requirements that are aimed at helping weapon programs establish a solid foundation from the start of development. GAO was asked to examine (1) DOD's progress in implementing the systems engineering and developmental testing requirements, (2) views on the alignment of the offices of the Directors of Systems Engineering and Developmental Test and Evaluation, and (3) challenges in strengthening systems engineering and developmental testing activities. 
In conducting this work, GAO analyzed implementation status documentation and obtained opinions from current and former DOD systems engineering and testing officials on the placement of the two offices as well as improvement challenges. DOD has implemented or is implementing the Reform Act requirements related to systems engineering and developmental testing. Several foundational steps have been completed. For example, new offices have been established, directors have been appointed for both offices, and the directors have issued a joint report that assesses their respective workforce capabilities and 42 major defense acquisition programs. Many other requirements that have been implemented will require sustained efforts by the directors' offices, such as approving systems engineering and developmental testing plans, as well as reviewing these efforts on specific weapon programs. DOD is studying the option of allowing the Director, Developmental Test and Evaluation, to serve concurrently as the Director of the Test Resource Management Center. The directors have not yet developed joint guidance for assessing and tracking acquisition program performance of systems engineering and developmental testing activities. It is unclear whether the guidance will include specific performance criteria that address long-standing problems and program risks, such as those related to concurrency of development and production activities and adequacy of program resources. Current and former systems engineering and developmental testing officials offered varying opinions on whether the new directors' offices should have been placed under the Director of Defense Research and Engineering organization--an organization that focuses primarily on developing and transitioning technologies to acquisition programs. 
The Director of Defense Research and Engineering believes aligning the offices under his organization helps address congressional and DOD desires to increase emphasis on and strengthen activities prior to the start of a new acquisition program. Most of the officials GAO spoke with believe the two offices should report directly to the Under Secretary for Acquisition, Technology and Logistics or otherwise be more closely aligned with acquisition programs because most of their activities are related to weapon programs. They also believe cultural barriers and staffing issues may limit the effectiveness of the two offices under the current organizational structure. Currently, DOD is not reporting to Congress on how successfully the directors are effecting program changes, making it difficult to determine if the current placement of the offices makes sense or if the Reform Act is having an impact. The military services face a number of challenges as they try to strengthen systems engineering and developmental testing activities on acquisition programs. Although the services believe they have enough staff to perform both of these activities, they have not been able to clearly identify the number of staff that are actually involved. The Director of Developmental Test and Evaluation does not believe the military services have enough testing personnel and is concerned that DOD does not have the capacity to train the large influx of contractors that are expected to be converted to government employees. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
As of September 30, 1996, DOD reported the value of its secondary inventory—consumable items and reparable parts—at $68.5 billion. Consumable items, such as clothing and medical supplies, are managed primarily by DLA. Reparable parts are generally expensive items that can be fixed and used again, such as hydraulic pumps, navigational computers, wing sections, and landing gear. Each military service manages reparable parts that are used for their operations. These management functions include determining how many parts will be needed to support operations, purchasing new parts, and deciding when broken parts need to be repaired. As shown in figure 1, aircraft reparable parts represent an estimated 59 percent of DOD’s secondary inventory. To provide reparable parts for their aircraft, the military services use extensive logistics systems that were based on management processes, procedures, and concepts that have evolved over time but are largely outdated. Each service’s logistics system, often referred to as a logistics pipeline, consists of a number of activities that play a role in providing aircraft parts where and when they are needed. These activities include the purchase, storage, distribution, and repair of parts, which together require billions of dollars of investment in personnel, equipment, facilities, and inventory. In our recent reports on the Army, the Navy, and the Air Force logistics pipelines, we highlighted many of the problems and inefficiencies associated with the services’ current logistics systems. Findings from these reports are summarized in appendix I. DOD must operate its logistics activities within the framework of various legislative provisions and regulatory requirements. Various legislative provisions govern the size, composition, and allocation of depot repair workloads between the public and private sectors. For example, the allocation of the depot maintenance workload between the public and private sectors is governed by 10 U.S.C. 
2466. According to the statute, not more than 50 percent of the funds made available for depot-level maintenance and repair can be used to contract for performance by nonfederal government personnel. Other statutes that affect the extent to which depot-level workloads can be converted to private sector performance include (1) 10 U.S.C. 2469, which provides that DOD-performed depot maintenance and repair workloads valued at not less than $3 million cannot be changed to contractor performance without a public-private competition and (2) 10 U.S.C. 2464, which provides that DOD activities should maintain a government-owned and operated logistics capability sufficient to ensure technical competence and resources necessary for an effective and timely response to a national defense emergency. Another provision that may affect future DOD logistics operations is 10 U.S.C. 2474, added to the United States Code by section 361 of the Fiscal Year 1998 National Defense Authorization Act. Section 2474 requires the Secretary of Defense to designate each depot-level activity as a Center of Industrial and Technical Excellence for certain functions. The act further requires the Secretary to establish a policy to encourage the military services to reengineer their depot repair processes and adopt best business practices. According to section 2474, a military service may conduct a pilot program, consistent with applicable requirements of law, to test any practices that the military service determines could improve the efficiency and effectiveness of depot-level operations, improve the support provided by the depots for the end user, and enhance readiness by reducing the time needed to repair equipment. 
Further, efforts to outsource functions other than depot-level maintenance and repair must be accomplished in accordance with the requirement of the Office of Management and Budget Circular A-76, various applicable provisions of chapter 146 of title 10 of the United States Code, as well as recurring provisions in the annual DOD Appropriations Act. In November 1997, the Secretary of Defense announced the Defense Reform Initiative, which seeks to reengineer DOD support activities and business practices by incorporating many business practices that private sector companies have used to become leaner, more agile, and highly successful. The initiative calls for adopting modern business practices to achieve world-class standards of performance in DOD operations. The Secretary of Defense stated that reforming DOD support activities is imperative to free up funds to help pay for high priorities, such as weapons modernization. We previously reported that several commercial airlines have cut costs and improved customer service by streamlining their logistics operations. The most successful improvements include using highly accurate information systems to track and control inventory; employing various methods to speed the flow of parts through the pipeline; shifting certain inventory tasks to suppliers; and having third parties handle parts repair, storage, and distribution functions. One airline, British Airways, has substantially improved its logistics operations over a 14-year period. British Airways approached the process of change as a long-term effort that requires steady vision and a focus on continual improvement. Although the airline has reaped significant gains from improvements, it continued to reexamine operations and make improvements to its logistics system. Adopting practices similar to British Airways and other commercial airlines could help DOD’s repair pipelines become faster and more responsive to customer needs. 
British Airways used a supply-chain management approach to reengineer its logistics system. With this approach, the various activities encompassed by the logistics pipeline were viewed as a series of interrelated processes rather than isolated functional areas. For example, when British Airways began changing the way parts were purchased from suppliers, it considered how those changes would affect mechanics in repair workshops. British Airways officials described how a combination of supply-chain improvements could lead to a continuous cycle of improvement. For example, culture changes, improved data accuracy, and more efficient processes all lead to a reduction in inventories and complexity of operations. These reductions, in turn, improve an organization’s ability to maintain accurate data. The reductions also stimulate continued change in culture and processes, both of which fuel further reductions in inventory and complexity. Despite this integrated approach, British Airways’ transformation did not follow a precise plan or occur in a rigid sequence of events. Rather, according to one manager, airline officials took the position that doing nothing was the worst option. After setting overall goals, airline officials gave managers and employees the flexibility to continually test new ideas to meet those goals. Four specific practices used by British Airways and other airlines that appear to be suited to DOD operations to the extent they can be implemented within the existing legislative and regulatory framework include the (1) prompt repair of items, (2) reorganization of the repair process, (3) establishment of partnerships with key suppliers, and (4) use of third-party logistics services. These initiatives are interrelated and, when used together, can help maximize a company’s inventory investment, decrease inventory levels, and provide a more flexible repair capability. 
They appear to address many of the same problems DOD faces and represent practices that could be applied to its operations. We recommended in our reports that DOD test these concepts in an integrated manner to maximize their potential benefits. Certain airlines begin repairing items as quickly as possible, which prevents the broken items from sitting idle for extended periods. Minimizing idle time helps reduce inventories because it lessens the need for extra “cushions” of inventory to cover operations while parts are out of service. In addition, repairing items promptly promotes flexible scheduling and production practices, enabling maintenance operations to respond more quickly as repair needs arise. Prompt repair involves inducting parts into maintenance shops soon after broken items arrive at repair facilities. However, prompt repair does not mean that all parts are fixed. The goal is to quickly fix only those parts that are needed. One commercial airline routes broken items directly to holding areas next to repair shops, rather than to stand-alone warehouses, so that mechanics can quickly access these broken parts. The holding areas also give mechanics better visibility of any backlog. It is difficult to specifically quantify the benefits of repairing items promptly because that practice is often used with other ones to speed up pipeline processes. One airline official said, however, that the airline has kept inventory investment down partly because it does not allow broken parts to remain idle. One approach to accelerate the repair process and promote flexibility in the repair shop is the “cellular” concept. Under this concept, an airline moved all of the resources that are needed to repair broken parts, such as tooling and support equipment, personnel, and inventory, into one location or repair center “cell.” This approach simplifies the repair of parts by eliminating the time-consuming exercise of routing parts to workshops in different locations. 
It also gives mechanics the technical support needed to keep operations running smoothly. In addition, because inventory is placed near workshops, mechanics have immediate access to the parts they need and can complete repairs more quickly. British Airways adopted the cellular approach after determining that parts could be repaired as much as 10 times faster using this concept. Figure 2 shows a repair cell used in British Airways' maintenance center at Heathrow Airport. Another airline that adopted this approach in its engine-blade repair shop was able to reduce repair time by 50 to 60 percent and decrease work-in-process inventory by 60 percent. Several airlines and manufacturers have worked with suppliers to improve parts support and reduce overall inventory. Two approaches, the use of local distribution centers and integrated supplier programs, specifically seek to improve the management and distribution of consumable items, such as nuts, bolts, and fuses. These approaches help ensure that the consumable items for repair and manufacturing operations are readily available, which prevents parts from stalling in the repair process and helps speed up repair time. In addition, by improving management and distribution methods, such as streamlined ordering and fast deliveries, these approaches enable firms to delay the purchase of inventory until a point that is closer to the time it is needed. Firms, therefore, can reduce their stocks of "just-in-case" inventory. Local distribution centers are supplier-operated facilities that are established near a customer's operations and provide deliveries of parts within 24 hours. One airline that used this approach has worked with key suppliers to establish more than 30 centers near its major repair operations. These centers receive orders electronically and, in some cases, handle up to eight deliveries a day. Airline officials said that the ability to get parts quickly has contributed to repair time reductions.
In addition, the officials said that the centers have helped the airline cut its on-hand supply of consumable items nearly in half. Figure 3 shows a local distribution center, located at Heathrow Airport, that is operated by the Boeing Company. Integrated supplier programs involve shifting inventory management functions to suppliers. Under this arrangement, a supplier is responsible for monitoring parts usage and determining how much inventory is needed to maintain a sufficient supply. The supplier’s services are tailored to the customer’s requirements and can include placing a supplier representative in customer facilities to monitor supply bins at end-user locations, place orders, manage receipts, and restock bins. Other services can include 24-hour order-to-delivery times, quality inspection, parts kits, establishment of data interchange links and inventory bar coding, and vendor selection management. One manufacturer that used an integrated supplier received parts 98 percent of the time within 24 hours of placing an order, which enabled the manufacturer to reduce inventories for these items by $7.4 million—an 84-percent reduction. Figure 4 illustrates how an integrated supplier could reduce or eliminate the need for at least three inventory storage locations in a typical DOD repair facility. Third-party logistics providers can be used to reduce costs and improve performance. Third-party firms take on responsibility for managing and carrying out certain logistics functions, such as storage and distribution. As a result, companies can reduce overhead costs because they no longer need to maintain personnel, facilities, and other resources that are required to do these functions in house. Third-party firms also help companies improve various aspects of their operations because these providers can offer expertise that companies often do not have the time or the resources to develop. 
For example, one airline contracts with a third-party logistics provider to handle deliveries and pickups from suppliers and repair vendors, which has improved the reliability and speed of deliveries and reduced overall administrative costs. The airline receives most items within 5 days, which includes time-consuming customs delays, and is able to deliver most items to repair vendors in 3 days. In the past, deliveries took as long as 3 weeks. In addition, third-party providers can assume other functions. One third-party firm that we visited, for example, can assume warehousing and shipping responsibilities and provide rapid transportation to speed parts to end users. The company can also pick up any broken parts from a customer and deliver them to the source of repair within 48 hours. In addition, this company maintains the data associated with warehousing and in-transit activities, offering real-time visibility of assets. If DOD were to adopt a combination of best practices, similar to those employed by commercial airlines, the time items spend in the services’ repair pipelines could be substantially reduced. For example, the cellular concept enables a repair shop to respond more quickly to different repair needs. An integrated supplier can provide the consumable parts needed to complete repairs faster and more reliably. Both of these concepts are needed to establish an agile repair capability, which in turn enables a company to repair items more promptly. A much faster and responsive repair pipeline would allow DOD to buy, store, and distribute significantly less inventory and improve customer service. For example, an Army-sponsored RAND study noted that reducing the repair time for one helicopter component from 90 to 15 days would reduce inventory requirements for that component from $60 million to $10 million. 
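The arithmetic behind the RAND example follows from a Little's-law style relationship: the inventory needed to cover a repair pipeline is roughly the daily demand, in dollar terms, multiplied by the pipeline time. The sketch below is illustrative only, not a reconstruction of RAND's model; the daily demand value is inferred from the study's two figures rather than stated in the report.

```python
# Sketch of the RAND example: the inventory needed to cover a repair
# pipeline is roughly proportional to pipeline time (daily demand in
# dollars multiplied by days in the pipeline).
# The daily demand value is inferred from the study's figures, not stated.

def pipeline_inventory(daily_demand_value, pipeline_days):
    """Dollar value of parts tied up covering the repair pipeline."""
    return daily_demand_value * pipeline_days

daily_demand = 60e6 / 90  # implied by $60 million of inventory at 90 days

# Cutting repair time from 90 to 15 days cuts the requirement sixfold.
print(round(pipeline_inventory(daily_demand, 90) / 1e6, 1))  # 60.0 ($ millions)
print(round(pipeline_inventory(daily_demand, 15) / 1e6, 1))  # 10.0 ($ millions)
```

The same proportionality explains why the practices described above, which attack pipeline time rather than inventory levels directly, can drive such large inventory reductions.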
Figures 5 and 6 use the Army's pipeline for reparable parts to illustrate the potential impact that the integrated use of best practices would have on DOD's logistics system. Figure 5 illustrates the current repair pipeline at Corpus Christi Army Depot, including the average number of days it took to move the parts we examined through this pipeline and the flow of consumable parts into the repair depot. The consumable parts flow includes hardware inventory stored in DLA warehouses and repair depot inventory, which in 1996 totaled $5.7 billion and $46 million, respectively. Despite this investment in inventory, the supply system was completely filling customer orders only 25 percent of the time. Also, as of August 1996, mechanics had more than $40 million in parts on backorder, 34 percent of which was still unfilled after 3 months. In addition, reparable parts flowing through this system took an average of 525 days to complete the process. Figure 6 illustrates a modified Army system, incorporating the use of an integrated supplier for consumable items, third-party logistics services, parts induction soon after they arrive at the depot, and cellular repair shops. If the military services were to adopt these practices, they could substantially reduce the number of days for a part to flow through the repair pipeline and reduce or eliminate much of the inventory in DLA and repair depot storage locations. DOD's application of concepts such as third-party logistics and integrated suppliers, however, may require a cost comparison between government and commercial providers in accordance with Office of Management and Budget Circular A-76. This circular generally requires that a public-private competition be held before contracting out functions, activities, and services that were being accomplished by more than 10 DOD employees.
Our work has consistently shown that this process is cost-effective because competition generates savings, usually through a reduction in personnel, whether the competition is won by the government or the private sector. Each of the military services has programs underway to improve logistics operations and make its processes faster and more flexible. The Army established its Velocity Management program to eliminate unnecessary steps in the logistics pipeline that delay the flow of parts through the system. The Navy is using a regionalization concept to reduce redundant capabilities in supply and maintenance and is testing a direct delivery concept for a few component parts. The Air Force established its Lean Logistics initiative to dramatically improve logistics processes. Although these initiatives have been underway for several years, the results are limited, and the overall success of these programs is uncertain. In January 1995, the Army established its Velocity Management program to develop a faster, more flexible, and more efficient logistics pipeline. The program's goals, concepts, and top management support parallel improvement efforts found in private sector companies. The overall goal of the program is to eliminate unnecessary steps in the logistics pipeline that delay the flow of parts through the system. The Army plans to achieve this goal in much the same way as the private sector: by changing its processes, not by refining the existing system. The Army's Vice Chief of Staff has strongly endorsed the program as a vehicle for making dramatic improvements to the current logistics system. In anticipation of these improvements, the Army has reduced its operating budgets for fiscal years 1998 through 2003 by $156.5 million.
The Velocity Management program consists of Army-wide process improvement teams for the following four areas: ordering and shipping of parts, the repair cycle, inventory levels and locations (also known as stockage determination), and financial management. For each of these areas, the Army is examining its current processes and attempting to identify ways to improve them. The Army’s implementation strategy for these improvement areas includes three phases: defining the process, measuring process performance, and improving the process. As shown in table 1, the four improvement areas are in various implementation phases. The order and shipping improvement area is in phase 3 and the farthest along in the implementation process. In this area, the Army has reduced the time it takes to order and deliver parts to a customer located in the United States from approximately 22 to 11 days, or by 50 percent. According to Army officials, this improvement was achieved by automating the ordering process and having delivery trucks dedicated to servicing a single customer. The Army plans to continue work on other functions in this area, such as the receiving process. The stockage determination and repair cycle initiatives are both in phase 2. According to Army officials, these improvement areas have not advanced as quickly as planned due to difficulties in obtaining reliable data to measure the current processes. Also, Army officials have not precisely determined what metrics to use for measuring future improvements. The financial management area, the last initiative to be started, is currently in phase 1. The Navy has three major improvement efforts underway that are aimed at reducing infrastructure costs and streamlining operations. The first initiative, called regional supply, consolidates decentralized supply management functions into seven regionally based activities. 
Under the old system, naval bases, aviation repair depots, and shipyards each had supply organizations to manage needed parts. These activities often used different information systems and business practices and their own personnel and facilities. This initiative does not consolidate inventories into fewer storage locations. The consolidation is intended to provide central management of spare parts for these individual operations, improve parts visibility, and reduce the overhead expenses associated with separate management functions. The Navy hopes that the centralized management approach will lead to better sharing of parts among locations and to reductions in inventories. In fiscal year 1997, the Navy reported inventory reductions of $4.9 million through its regional supply program, and it expects to reduce inventories by an additional $24 million in fiscal year 1998. The Navy expects that 90 percent of the supply management consolidations will be completed by the end of fiscal year 1998. The second initiative, called regional maintenance, similarly identifies redundant maintenance capabilities and consolidates these operations into regionally based repair facilities. For example, in one region the Navy is consolidating 32 locations used to calibrate maintenance test equipment into 4 locations. The regional maintenance program is mainly focused on reducing infrastructure costs, but its other objectives include improving maintenance processes, integrating supply support and maintenance functions, and providing compatible information systems. Through fiscal year 1996, the Navy identified a total of 102 regional maintenance initiatives: 55 were started in fiscal year 1997, and 47 are to be implemented between fiscal years 1998 and 2001. The Navy estimates that its regional maintenance efforts will save $944 million between fiscal years 1994 and 2001.
We recently reported that, although the Navy has made progress in achieving its infrastructure streamlining objective under regional maintenance, the progress thus far has not been as great as anticipated and challenges remain for accomplishing future plans. Full implementation, initially projected for fiscal year 1999, is now projected for fiscal year 2000 and could take longer. Many of the initiatives identified have not been completed, and projected savings are not being achieved. For example, one initiative to consolidate planning and engineering functions for certain repairs is not progressing as planned, delaying planned personnel reductions and affecting up to $92 million in savings projected to occur between fiscal years 1998 and 2001. The Navy has classified many of its initiatives as high risk because of barriers to implementation, including institutional resistance to change, inadequate information systems, and poor visibility over maintenance-related costs. The Navy’s third initiative, called direct vendor delivery, is a logistics support technique intended to reduce the costs of the inventory management and distribution functions. Under this initiative, a contractor (typically an original equipment manufacturer) will be responsible for repairing, storing, and distributing weapon system components. The contractor agrees to meet certain delivery timeframes and supply availability rates for the components. When a component fails at an operating location, it is sent directly to the contractor rather than to a Navy repair facility. The contractor in turn ships a replacement part back to the operating location. If a future demand for the item is anticipated, then the contractor fixes the broken component so it can be used again. According to the Navy, the direct vendor delivery concept will motivate the contractor to increase the reliability of the component so it needs to be repaired less frequently, which may reduce the component’s life-cycle costs. 
The direct vendor delivery concept is in the early stages of development. As of January 1998, the Navy had placed only three subsystems, consisting of 96 components, under contract. The value of these three contracts represents about 1 percent of the Navy's fiscal year 1998 purchase and repair budget. The Navy plans, however, to apply this concept to additional weapon system components in the future. In 1994, the Air Force initiated a reengineering effort called Lean Logistics to dramatically improve logistics processes. The Air Force describes Lean Logistics as the cornerstone of all future logistics system improvements. This effort, spearheaded by the Air Force Materiel Command, is aimed at improving service to the end user while reducing pipeline time, excess inventory, and other logistics costs. The Air Force expects to save $948 million in supply costs between fiscal years 1997 and 1999 as a result of Lean Logistics initiatives. Under Lean Logistics, the Air Force developed a program to redesign the current repair pipeline. In June 1996, the Air Force began testing certain concepts at 10 repair shops, and the tests involve less than 1 percent of the Air Force's inventory items. The concepts include repairing items quickly after they break, using premium transportation to rapidly move parts, organizing support (supply and repair) personnel into teams, and deploying new information systems to better prioritize repair actions and track parts. Each shop tested some of these concepts and identified system improvements needed to adopt these practices on a broader scale. As part of its demonstration projects, the Air Force tracked overall performance in four general areas: customer impact, responsiveness to the customer, repair depot efficiency, and operating costs. According to an October 1997 cost-benefit analysis of these projects, the tests were not a complete success.
For example, 70 percent of the shops showed improvement in depot repair efficiency, but only 10 percent of the shops showed improvement in responsiveness to the customer. Also, three of the four performance areas showed mixed results for 50 percent or more of the shops. According to the Air Force analysis, full implementation of the concepts may need to be re-evaluated and refined to achieve desired improvements in customer service and operating costs. Table 2 shows the impact of the demonstration projects on the four performance areas. Notwithstanding the results of the demonstration projects, the Air Force began expanding these concepts servicewide in April 1997 and plans to complete this effort by the spring of 1998. According to the Air Force, the concepts will be refined as implementation continues. The military services' current improvement efforts could be expanded to include a wider application of the best practices discussed in this report. In addition, the services have not established specific locations where a combination of several practices could be tested to achieve maximum benefits. These expanded efforts would be consistent with recent legislative provisions and the Defense Reform Initiative, which encourage the adoption of best business practices. However, a wider application of best practices by DOD must be accomplished within the current legislative framework and regulatory requirements. Our previous reports recommended the testing and implementation of best practices, specifically, prompt repair of items, cellular repair, supplier partnerships, and third-party logistics, as well as an integrated test of these practices. The Navy and the Air Force have initiated programs to adopt certain forms of supplier partnerships, and the Air Force is pursuing the prompt repair of items throughout its operations. Table 3 summarizes the status of the services' efforts in implementing best practices.
As part of its Lean Logistics program, the Air Force has adopted the concept of prompt repair of items to help speed the flow of parts through the repair process. In February 1997, the Air Force also began using a prime vendor program to support the C-130 propeller repair shop at the Warner Robins Air Logistics Center. In fiscal year 1998, the Air Force plans to expand the prime vendor program at Warner Robins and begin programs at two other Air Force repair depots. The Navy plans to test the prime vendor concept at two depots during 1998. As of April 1997, the Army was using the cellular repair concept at two maintenance shops in the Corpus Christi Army depot. The Army, however, has not initiated any additional tests of the practices recommended in our reports at the Corpus Christi depot. Finally, none of the services have developed a plan to combine these new practices at one facility. In commenting on a draft of this report, DOD highlighted additional initiatives that it believes demonstrate the use of best commercial practices. For example, the Army is pursuing an initiative to rapidly repair 20 different circuit cards at two Army depots and return the cards using premium transportation. The Army plans to expand this concept later this year to engine components. DOD also highlighted Navy efforts to reduce the administrative lead times involved in repairing maritime parts and have a third-party provider build repair kits for hydraulic parts. In addition, DOD cited an Air Force initiative related to the contractor support for certain C-17 aircraft parts. Under this arrangement, the contractor is responsible for interim contractor support, depot repair, materiel and program management, and system modifications. 
Section 395 of the National Defense Authorization Act for Fiscal Year 1998 requires the Director of DLA to develop and submit to Congress a schedule for implementing best practices for the acquisition and distribution of categories of consumable-type supplies and equipment listed in the section. However, each military service manages reparable parts that are used in its operations; DLA stores and distributes these parts and manages consumable items. Each service and DLA, therefore, would be responsible for developing and implementing a strategy to adopt best practices for the items they manage if section 395 were broadened to include reparable parts. Our work shows it is feasible for the list of items covered by section 395 to be expanded to include reparable parts. For example, each of the services and DLA have initiatives underway designed to improve their logistics operations by adopting best practices. Our reports identify additional best practices that present opportunities for DOD to build on these improvement efforts. However, if section 395 were expanded, the responsibility for the development and submission of a schedule to implement these practices would go beyond the purview of the Director of DLA. Thus, expanding the list of items covered by the provisions included in section 395 would also appear to warrant broadening the responsibility for responding to the legislation to include the military services. Our previous reports recommended that DOD test and adopt best practices where feasible; therefore, we are not repeating those recommendations in this report. However, testing a combination of several key best practices is an option that DOD has yet to explore as it considers the extent to which successful techniques used in the private sector could be applied to its logistics operations. This action would be consistent with recently enacted Centers of Industrial and Technical Excellence legislation and the Defense Reform Initiative. 
This wider application of best practices by DOD must be accomplished within the framework of existing legislative and regulatory requirements. If Congress decides it wants to expand the provisions of section 395 to include reparable parts, it may wish to consider (1) broadening the responsibility for responding to this legislation to include the military services and (2) developing provisions, similar to those in section 395, to encourage DOD to test combinations of best practices using a supply-chain management approach. In written comments on a draft of this report, DOD agreed that further progress is possible in using best practices for reparable parts. However, DOD has concerns in two areas. First, DOD believed that our draft report did not include all ongoing initiatives by the military services to adopt best business practices in the management of reparable parts. Second, DOD did not agree with our Matters for Congressional Consideration that the Congress may wish to consider developing statutory guidance related to best practices for reparable parts. DOD believed that, because of its actions underway, statutory guidance is not needed. DOD’s comments appear in appendix II. We incorporated several of the examples DOD provided into our report. However, some of these initiatives, particularly the newly awarded contract for C-17 aircraft support, involve integrated supplier support and third-party logistics predominately on the part of the contractor. Our past work and this report have been concerned with efforts to improve the existing in-house repair pipeline through the use of proven best practices adopted in the private sector, especially for aircraft parts, once the decision has been made to keep the repair function at public facilities. This C-17 contract represents a different arrangement and we are not in a position to comment on the merits of that approach. 
With regard to the Matters for Congressional Consideration, our intent is to highlight two actions that we believe may be useful to Congress if it decides to expand section 395 to include reparable parts. Therefore, we modified this section to clarify our intent. We used information from our three prior reports that compared Army, Navy, and Air Force logistics practices to those of commercial airlines. For these reports, we examined operations at 20 DOD locations involved in the logistics pipeline. At these locations, we discussed with supply and maintenance personnel the operations of DOD's current logistics system, customer satisfaction, planned improvements to the logistics system, and the potential application of private sector practices to DOD operations. We also reviewed and analyzed detailed information on inventory levels and usage, repair times, supply effectiveness and response times, and other related logistics performance measures. Unless otherwise noted, inventory values reflect DOD's standard valuation methodology, in which excess inventory is reported at an estimated salvage value and reparable parts requiring repair are reduced by an average estimate of repair costs. We also used information from our reports to identify leading commercial practices. This information was collected through an extensive literature search and through detailed examinations and discussions of logistics practices with officials from British Airways, United Airlines, Southwest Airlines, American Airlines, Federal Express, Boeing, Northrop-Grumman Corporation, and Tri-Star Aerospace. We also participated in roundtable discussions and symposiums with recognized leaders in the logistics field to obtain information on how companies are applying integrated approaches to their logistics operations. We reviewed documents and interviewed officials on DOD's policies, practices, and efforts to improve its logistics operations.
We contacted officials at the Office of the Deputy Under Secretary of Defense for Logistics, Washington, D.C.; Army Headquarters, Washington, D.C.; Army Materiel Command, Alexandria, Virginia; Naval Supply Systems Command, Mechanicsburg, Pennsylvania; Naval Inventory Control Point, Mechanicsburg, Pennsylvania; Air Force Headquarters, Washington, D.C.; and Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio. Also, officials at these locations provided us with detailed information on their efforts to adopt the specific best practices we recommended in prior reports. We conducted our review from December 1997 to January 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to other congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Directors of the Defense Logistics Agency and the Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. The Department of Defense’s (DOD) depot repair pipelines for reparable parts are slow and inefficient. Since February 1996, we have issued three reports that compared commercial logistics practices with similar Army, Navy, and Air Force operations for reparable aircraft parts. In these reports, we highlighted four factors that contributed to the services’ slow and inefficient repair pipelines. These factors are (1) broken reparable parts move slowly between field units and a repair depot, (2) reparable parts are stored in warehouses for several months before and after they are repaired, (3) work processes at repair depots are inefficiently organized, and (4) consumable parts are not frequently available to mechanics when needed. 
As a result, the services can spend several months or even years to repair and distribute repaired parts to the end user. The amount of time it takes to repair parts is important because DOD must invest in enough inventory to resupply units with serviceable parts during the time it takes to move and repair broken parts. In April 1997, we reported that the Army’s current repair pipeline, characterized by a $2.6-billion investment in aviation parts, is slow and inefficient. To calculate the amount of time the Army system takes to repair and distribute parts using the current depot repair process, we judgmentally selected 24 types of Army aviation parts and computed the time the parts spent in four key segments of the repair process. The key segments were (1) preparing and shipping the parts from the bases to the depot, (2) storing the parts at the depot before induction into the repair shop, (3) repairing the parts, and (4) storing the parts at the depot before being shipped to a field unit. The parts we selected took an average of 525 days to complete the repair process. The fastest time the Army took to complete any of the four pipeline segments was less than 1 day, but the slowest times ranged from 887 to more than 1,000 days. Table I.1 details the fastest, slowest, and average times the Army needed to complete each of the four pipeline segments. Comparing the Army’s engineering estimate of the time repairs should take with the actual time taken is one measure of repair process efficiency. Of the 525-day average pipeline time from our sample, the Army estimates that an average of 18 days should be needed to repair items. The remaining 507 days, or 97 percent of the total time, was spent transporting or storing parts or was due to unplanned repair delays. Another measure of repair process efficiency is a calculation of how often an organization uses its inventory, called the turnover rate.
The higher the turnover rate, the more often a company is utilizing its inventory. At British Airways, the inventory turnover rate for reparable parts was 2.3 times each year. In comparison, we calculated that the Army’s turnover rate for fiscal year 1995 repairs was 0.4 times, or about 6 times slower than British Airways. In July 1996, we reported that the Navy’s system, characterized by a $10 billion inventory of reparable parts, is slow and complex and often does not respond quickly to customer needs. For example, customers wait an average of 16 days at operating bases and 32 days on aircraft carriers to receive parts from the wholesale system. If the wholesale system does not have the item in stock, customers wait over 2-1/2 months. Many factors contribute to this situation, but among the most prominent is a slow and complex repair pipeline. Within this pipeline, broken parts can pass through as many as 16 steps, taking as long as 4 months, before they are repaired at a repair depot and are available again for use. Specific problems that prevent parts from flowing quickly through the pipeline include a lack of consumable parts needed to complete repairs, slow distribution, and inefficient repair practices. For example, the Navy’s practice of routing parts through several workshops at repair depots increases the time needed to complete repairs. One item we examined had a repair time of 232 hours, only 20 hours of which was spent actually repairing the item. The remaining 212 hours, or 91 percent of the total time, was spent handling and moving the part to different locations. In contrast, leading firms in the airline industry, including British Airways, hold minimum levels of inventory that can turn over four times as often as the Navy’s. Parts are more readily available and delivered to the customer within hours. 
The repair process is faster, taking an average of 11 days for certain items at British Airways compared with the Navy’s 37-day process for a similar type of part. Table I.2 compares several key logistics performance measures of British Airways (1994 data) and the Navy (1995 data). In February 1996, we reported that the Air Force had invested about $36.7 billion in aircraft parts. Of this amount, the Air Force estimated $20.4 billion, or 56 percent, was needed to support daily operations and war reserves, and the remaining $16.3 billion was divided among safety stock, other reserves, and excess inventory. These large inventory levels were driven in part by the slow logistics pipeline process. For example, one part we examined had an estimated repair cycle time of 117 days; it took British Airways only 12 days to repair a similar part. We reported that the complexity of the Air Force’s repair and distribution process creates as many as 12 different stopping points and several layers of inventory as parts move through the process. Parts can accumulate at each step in the process, which increases the total number of parts in the pipeline. Figure I.1 compares the Air Force’s pipeline times with British Airways’ times for a landing gear component. C. I. (Bud) Patton, Jr. Kenneth R. Knouse, Jr. Defense Inventory Management: Expanding Use of Best Practices for Hardware Items Can Reduce Logistics Costs (GAO/NSIAD-98-47, Jan. 20, 1998). Inventory Management: Greater Use of Best Practices Could Reduce DOD’s Logistics Costs (GAO/T-NSIAD-97-214, July 24, 1997). Inventory Management: The Army Could Reduce Logistics Costs for Aviation Parts by Adopting Best Practices (GAO/NSIAD-97-82, Apr. 15, 1997). Defense Inventory Management: Problems, Progress, and Additional Actions Needed (GAO/T-NSIAD-97-109, Mar. 20, 1997).
Inventory Management: Adopting Best Practices Could Enhance Navy Efforts to Achieve Efficiencies and Savings (GAO/NSIAD-96-156, July 12, 1996). Best Management Practices: Reengineering the Air Force’s Logistics System Can Yield Substantial Savings (GAO/NSIAD-96-5, Feb. 21, 1996). Inventory Management: DOD Can Build on Progress in Using Best Practices to Achieve Substantial Savings (GAO/NSIAD-95-142, Aug. 4, 1995). Commercial Practices: DOD Could Reduce Electronics Inventories by Using Private Sector Techniques (GAO/NSIAD-94-110, June 29, 1994). Commercial Practices: Leading-Edge Practices Can Help DOD Better Manage Clothing and Textile Stocks (GAO/NSIAD-94-64, Apr. 13, 1994). Commercial Practices: DOD Could Save Millions by Reducing Maintenance and Repair Inventories (GAO/NSIAD-93-155, June 7, 1993). DOD Food Inventory: Using Private Sector Practices Can Reduce Costs and Eliminate Problems (GAO/NSIAD-93-110, June 4, 1993). DOD Medical Inventory: Reductions Can Be Made Through the Use of Commercial Practices (GAO/NSIAD-92-58, Dec. 5, 1991). Commercial Practices: Opportunities Exist to Reduce Aircraft Engine Support Costs (GAO/NSIAD-91-240, June 28, 1991).
| Pursuant to a legislative requirement, GAO reported on the feasibility of adding reparable parts to the list of consumable-type supplies and equipment covered by Section 395 of the National Defense Authorization Act of 1998, focusing on: (1) private-sector practices that streamline logistics operations; (2) Department of Defense (DOD) initiatives to improve its logistics systems; and (3) best practices that can be used to improve the military services' aircraft reparable parts pipeline. GAO noted that: (1) it is feasible for the list of items covered by section 395 to be expanded to include reparable parts; (2) in fact, all of the services and the Defense Logistics Agency (DLA) have initiatives under way designed to improve their logistics operations by adopting best practices; (3) however, if section 395 were expanded to include reparable parts, the responsibility for the development and submission of a schedule to implement best practices would also have to be expanded to include the military services, since responsibility for service-managed reparable parts is beyond the purview of the Director of DLA; (4) private-sector companies have developed new business strategies and practices that have cut costs and improved customer service by streamlining logistics operations; (5) the most successful improvement efforts included a combination of practices that are focused on improving the entire logistics pipeline--an approach known as supply-chain management; (6) the combination of practices that GAO has observed includes the use of highly accurate information systems, various methods to speed the flow of parts through the pipeline, and the shifting of certain logistics functions to suppliers and third parties; (7) DOD recognizes that it needs to make substantial
improvements to its logistics systems; (8) the Army's Velocity Management program, the Navy's regionalization and direct delivery programs, and the Air Force's Lean Logistics initiative are designed to improve logistics operations and make logistics processes faster and more flexible; (9) although these initiatives have achieved some limited success, significant opportunities for improvement remain; (10) GAO's work indicates that best practices developed by private-sector companies are compatible with DOD improvement initiatives; and (11) however, GAO recognizes the use of these best practices must be accomplished within the existing legislative framework and regulatory requirements relating to defense logistics activities, such as the Office of Management and Budget Circular A-76. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Floods are the most frequent natural disasters in the United States, causing billions of dollars of damage annually. In 1968, Congress created NFIP to address the increasing cost of federal disaster assistance by providing flood insurance to property owners in flood-prone areas, where such insurance was either not available or prohibitively expensive. Since its inception, the NFIP has been a key component of the nation’s efforts to minimize or mitigate the financial impact of flood damage on property owners and limit federal expenditures after floods occur. Community participation is central to NFIP’s success. In order to participate in the program, communities must adopt and agree to enforce floodplain management regulations to reduce future flood damage. In exchange, NFIP makes federally backed flood insurance available to homeowners and other property owners (for example, farmers and other businesses) in these communities. As of May 2014, about 22,052 communities were participating in the program. Property owners can purchase flood insurance to cover both buildings and contents for residential and nonresidential properties. Insurable structures must have two or more outside rigid walls and a fully secured roof that is affixed to a permanent site. NFIP’s maximum coverage limit for residential policyholders is $250,000 for buildings and $100,000 for contents. For nonresidential policyholders, the maximum coverage is $500,000 for buildings and $500,000 for contents. Agricultural structures are considered nonresidential structures, so items such as grain stored in a bin or a tractor stored in a shed are covered by contents coverage. Policyholders purchase separate policies for each structure they insure. Deductibles range from $1,000 to $5,000 on residential structures and $1,000 to $50,000 on nonresidential structures. When NFIP was created, property owners were not required to buy flood insurance, so participation was voluntary. 
Congress amended the original law in 1973 to require some property owners to purchase flood insurance in certain circumstances (mandatory purchase requirement). The mandatory purchase requirement applies to owners of properties located in SFHAs in participating communities with mortgages held by federally regulated lenders or federal agency lenders, or who receive direct financial assistance for acquisition or construction purposes. Individuals in SFHAs who receive federal disaster assistance after September 23, 1994, for flood losses to real or personal property are also required to purchase and maintain flood insurance on the property as a condition for receiving future disaster assistance. The 2014 Act permits residential policyholders to forgo coverage for detached structures that do not serve as residences. The 1973 Act also added certain requirements that, according to FEMA officials, were intended to encourage community participation in NFIP. Specifically, communities are required to adopt and agree to enforce adequate floodplain management regulations as a condition of participation in NFIP. In exchange, flood insurance and certain federal disaster assistance will be made available to property owners in the community. Community ordinances or regulations must be consistent with NFIP’s minimum regulatory requirements, although communities may exceed the minimum criteria by adopting more comprehensive regulations. The following are some of the key NFIP building requirements and alternatives for new and substantially improved or substantially damaged structures located in riverine SFHAs. Elevation. All new and substantially improved or substantially damaged structures must be elevated to or above the base flood elevation (BFE). The BFE is the projected level that flood water is expected to reach or exceed during a flood with an estimated 1 percent chance of occurring in any given year. 
The flood depth—the height at which structures should be built—is calculated as the difference between the BFE and the ground elevation, which is established by topographic surveys. Dry flood-proofing. Nonresidential structures, including agricultural structures, may be flood-proofed instead of elevated. Nonresidential structures that are dry flood-proofed are designed to be watertight below the BFE. Wet flood-proofing. FEMA also has guidance to allow communities to grant some categories of nonresidential structures, including certain agricultural structures, an exception from the requirement that certain structures be elevated or dry flood-proofed. This variance enables certain structures to be wet flood-proofed—applying permanent or contingent measures to a structure and/or its contents that prevent or provide resistance to damage from flooding by allowing flood waters to enter the structure. FEMA has instructed communities that variances may be issued for certain types of agricultural structures located in wide, expansive floodplains that are used solely for agricultural purposes, such as storage, harvesting, or drying. These types of structures include grain bins, corn cribs, general purpose barns open on at least one side, and buildings that store farm machinery and equipment. FEMA bases premium rates for NFIP policies on a property’s risk of flooding and several other factors. Specifically, FEMA uses location and property characteristics, such as flood zone designation, elevation of the property relative to the property’s BFE, building type (e.g., residential or nonresidential), number of floors, presence of a basement, and the year of construction relative to the year of a community’s original flood map. Additionally, FEMA uses data on prior claims, coverage amount, and policy deductible amount. NFIP has historically had two types of flood insurance premium rates: those that reflect the full risk of flooding to a property (full-risk rates) and those that do not.
Properties that have not been charged property-specific full-risk rates have included those with grandfathered and subsidized rates. The largest number of subsidized policies has been for properties built before the initial flood insurance rate maps became available. The authority for subsidized rates was included in the National Flood Insurance Act of 1968 as an incentive to encourage participation in the program. In July 2012, Congress enacted the Biggert-Waters Act, which made significant changes to FEMA’s ability to charge subsidized rates. These changes phased out existing subsidies for certain types of properties—including business properties, residential properties that are not a primary residence, properties that have sustained substantial damage exceeding 50 percent of fair market value or substantial improvement exceeding 30 percent of fair market value, and severe repetitive loss properties—through 25 percent annual premium increases until the full-risk rate is reached. For other properties, the Biggert-Waters Act raised the cap on annual premium rate increases from 10 percent to 20 percent, by risk class. The Biggert-Waters Act also prohibited subsidies from being extended for homes sold to new owners and removed them if properties were not covered or had a lapse in coverage after the date of enactment of the act as a result of the policyholders’ deliberate choice. However, the 2014 Act reinstated premium subsidies for properties that were purchased after July 6, 2012, and properties not insured as of July 6, 2012. It also generally limited annual increases in property-specific premium rates to 18 percent for policies not covered by the 25-percent increases under the Biggert-Waters Act, although it changed the substantial improvement threshold to 50 percent from the Biggert-Waters Act’s 30 percent.
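The phase-out mechanics described above are simple compounding. As a rough illustrative sketch (not drawn from the report—the premium and full-risk dollar figures below are hypothetical), the following computes how many years of capped annual increases it would take a subsidized premium to reach a full-risk rate:

```python
# Hypothetical illustration of NFIP subsidy phase-out arithmetic.
# The dollar amounts are invented for the example; the 25% and 18%
# figures are the annual-increase limits discussed in the text.

def years_to_full_risk(premium, full_risk_rate, annual_increase):
    """Years of compounding capped increases until the premium
    reaches or exceeds the full-risk rate."""
    years = 0
    while premium < full_risk_rate:
        premium *= 1 + annual_increase
        years += 1
    return years

# A hypothetical $1,000 subsidized premium against a $2,500 full-risk rate:
print(years_to_full_risk(1000, 2500, 0.25))  # 5 years under the 25% phase-out
print(years_to_full_risk(1000, 2500, 0.18))  # 6 years under the 18% cap
```

In practice FEMA sets the actual rates; the sketch only shows why the 25-percent phase-out path reaches full-risk rates faster than the 18-percent general cap.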
The 2014 Act does not remove the phase-out for policies covering nonprimary residences, severe repetitive loss properties, and business properties, among others. The Biggert-Waters Act also generally prohibited the grandfathering of rates after future remapping and required any rate increases stemming from future remapping to be phased in over time. However, the 2014 Act eliminated the Biggert-Waters Act’s changes to grandfathering provisions, but included a provision which may prohibit grandfathering in limited situations. FEMA creates maps that show the degree of flood hazard so that properties in participating communities can be assigned actuarial premium rates—that is, rates that reflect the full risk of flooding—for insurance purposes. Flood maps also show SFHAs for which communities must adopt and enforce building requirements as part of their NFIP participation. Lending institutions use flood maps to identify properties that are required to have flood insurance and to help ensure that the owners buy and maintain it. FEMA engineers create flood maps using statistical information such as data for river flow, storm tides, hydrologic/hydraulic analyses, and rainfall and topographic surveys. The results of the topographic and flood hazard analyses are combined and integrated into digital maps that depict floodplain boundaries and the projected height of the base flood—the flood level that has a 1 percent chance of being equaled or exceeded in any given year. NFIP establishes flood zone designations through its mapping process (see table 1). Areas designated as A, AE, V, or VE zones have a high risk of flooding and are considered SFHAs. Areas designated as V or VE zones are located along the coast and have an additional hazard associated with storm waves. Areas with a moderate to low risk of flooding are designated as B, C, or X zones. Areas where flood risk is possible but undetermined are designated as D zones.
For the purpose of our study, we are considering areas with flood zone designations beginning with an A to be high-risk riverine floodplains. FEMA is required by statute to assess the need to revise and update all floodplain areas and flood risk zones at least every 5 years. The agency has undertaken two initiatives to update and modernize its flood maps. Until 2003, flood maps were created and stored in paper format. From 2003 to 2008, FEMA spent $1.2 billion to upgrade the nation’s flood maps to digital format as part of the Map Modernization initiative. Through this program, FEMA created digital flood maps for more than 92 percent of the population. In fiscal year 2009, FEMA began a 5-year initiative—Risk MAP—to improve the quality of data used in flood mapping. FEMA’s goals for the initiative include addressing gaps in flood hazard data; increasing public awareness of risk; and supporting mitigation planning by state, local, and tribal entities. Risk MAP’s primary areas of focus are coastal flood hazard areas, areas affected by levees, and significant riverine flood hazards. Risk MAP received $325 million in appropriations in fiscal year 2009, but appropriations have declined since, falling to about $216 million in fiscal year 2014. According to FEMA officials, FEMA prioritizes its mapping projects based on needs and risk and balances them with available funding. Need is determined by assessing current flood data and changes since the last update. Risk is assessed largely by population and the number of structures and their exposure to flood hazards. While rural and agricultural areas may have needs identified, they are generally low risk and thus may not be a high priority for map updates. According to FEMA officials, low-risk areas are more likely to receive approximated mapping studies than detailed mapping studies. Approximated mapping studies are not based on the same quality or quantity of data as are detailed studies.
Maps made using approximated studies also do not show the BFE. This may require that communities or property owners in those areas obtain a BFE from local or state officials, developers, or other organizations. They may also develop their own BFE by hiring an engineer or surveyor or using guidance provided by FEMA, according to FEMA officials. However, according to FEMA officials, some rural or agricultural areas would have been a part of these mapping efforts because for Risk MAP, FEMA maps on a watershed basis—a large area of land that may include both populated and unpopulated areas. Flood maps also reflect whether an area is protected by an accredited levee. In order to have a levee accredited, the owners or community officials must demonstrate that the levee system provides adequate flood protection and has been adequately maintained by submitting an engineering certification indicating that the levee complies with established criteria. If a levee receives accreditation, property owners in the area it protects may not be subject to the mandatory purchase requirement if the area is not mapped as an SFHA. In some cases, areas behind accredited levees are still prone to flooding due to a lack of interior drainage or flooding from other sources and will therefore still be mapped as an SFHA, resulting in the property owners behind that levee still being required to purchase flood insurance. Because FEMA does not identify whether floodplains are in urban or rural areas for the purposes of administering NFIP, we used available data to estimate the location of rural communities and agricultural areas in riverine SFHAs. We defined rural areas as areas that are not considered urbanized areas or urban clusters using U.S. Census Bureau data. We defined agricultural areas as those counties with 50 percent or more of their land areas used in agriculture, according to USDA’s Atlas of Rural and Small-Town America.
Figure 2 shows the location of riverine SFHAs according to FEMA’s flood map data in the areas we defined as agricultural areas and rural communities. Our analysis of FEMA data showed that the population mapped in rural and agricultural SFHAs stayed about the same during FEMA’s Map Modernization initiative, though certain areas saw increases or decreases. Specifically, the population in rural and agricultural SFHAs increased by 0.11 percent through Map Modernization, while the population in urban SFHAs decreased by 0.8 percent. Based on interviews with floodplain management officials, farmers, and others in selected communities, the effects of NFIP’s building requirements for agricultural structures have generally varied. To comply with these requirements, new or substantially improved nonresidential structures in high-risk areas must be elevated or dry flood-proofed. FEMA guidance issued in 1993 noted that communities could allow wet flood-proofing, which permits water to flow through a structure, for some nonresidential structures, including certain types of agricultural structures located in vast, expansive floodplains. However, the agency acknowledged that the methods included in the guidance do not cover all of the different types of agricultural structures located in vast floodplains with deep flood depths and may not reflect the changes in the size and scale of farm operations in recent years. Without additional guidance from FEMA, farmers may face challenges in effectively complying with its building requirements. We found that the effects of NFIP building requirements varied in selected communities and that the requirements negatively affected certain farmers who were located in vast floodplains with relatively deep flood depths.
We selected eight geographically diverse locations in SFHA riverine floodplains in California, Louisiana, North Carolina, and North Dakota that supported crops or livestock requiring onsite agricultural structures. Representatives from FEMA, USDA, and national floodplain management and farm organizations told us that they were unaware of any farmers in these states or others that faced negative effects on their operations from the NFIP building requirements (e.g., elevation, dry flood-proofing, or wet flood-proofing for certain nonresidential structures). State and local floodplain managers we spoke with from Louisiana, North Carolina, and North Dakota also said that they were not aware of any widespread concerns that farmers were having with NFIP’s building requirements or of any negative effects the requirements might be having on agricultural expansion. Correspondingly, 12 farmers in the communities we selected concurred with these views and generally told us that they had not been adversely affected by NFIP building requirements. However, state and local floodplain managers we spoke with from California said that some farmers in their state had been negatively affected by the requirements. The California state floodplain manager told us that the affected farmers typically lived and operated in agricultural areas behind levee systems that trapped water and had deep flood depths—up to 15 feet in some areas, compared with 1 to 6 feet in other states. The deep flood depths make it difficult for the farmers to build new structures in accordance with NFIP requirements because of the cost and complexity of elevating and dry or wet flood-proofing the new structures. This challenge is especially difficult in several counties along the lower Sacramento River, including Sutter and Yolo Counties, where building requirements had affected farmers’ ability to expand or rebuild agricultural structures, according to the California state floodplain manager.
In addition, representatives of an agricultural floodplain management group whose members are primarily from California’s Central Valley said that farmers they represented were concerned about the financial and technical feasibility of elevating or flood-proofing some agricultural structures to meet NFIP’s building requirements. The 11 farmers we spoke to in these two communities shared these concerns and told us that they had experienced similar negative effects due to the NFIP building requirements. Two key factors may partly explain the differing views of farmers in California as compared to those in the other selected rural and agricultural communities regarding the effects of NFIP building requirements. First, SFHAs in the two California communities have greatly increased in size in recent years compared to the other communities (see fig. 3). According to FEMA, the increase was mainly a result of areas behind unaccredited levees at risk of flooding being remapped into SFHAs. Second, the requirement to elevate or dry flood-proof structures above the BFE is harder to meet in the California communities because the flood depth is up to 15 feet in certain areas, compared to the other selected communities in North Dakota and Louisiana, whose flood depths range from 1 to 6 feet. Farmers in Louisiana, North Carolina, and North Dakota generally have been able to expand their operations in areas outside of SFHAs. For example, local floodplain managers in Duplin and Tyrrell Counties (North Carolina) told us that huge livestock processing plants were usually built outside of SFHAs after Hurricane Floyd in 1999 destroyed millions of livestock in the state. Because of the severe damage from this hurricane, the state encouraged farmers to build their agricultural structures outside of SFHAs whenever possible.
In addition, according to some farmers we spoke to in the selected Louisiana communities, at least a portion of their farmland was in non-SFHA areas, and they built or expanded their agricultural structures in those areas. As a result, they were not required to comply with the NFIP building requirements because those structures were not built in SFHAs. Further, four farmers in the Louisiana communities told us that they generally built their agricultural structures at the highest points on their farms, areas that were outside the SFHA. (Updated levee analysis can result in levee de-accreditation—that is, a determination that a levee no longer meets federal design, construction, maintenance, and operation standards to provide protection from a major flood. Subsequently, areas behind the levees can be remapped into SFHAs. See 44 C.F.R. §§ 65.10, 65.14.) In contrast, a walnut farmer in one of the selected California communities could not simply build outside the SFHA because he could not process crops that far from the harvest area (which lay inside the SFHA); the walnuts could be damaged during transport. We also found that the California farmers from our selected communities experienced greater challenges in relation to elevating structures than farmers in other areas. Local floodplain managers from the selected communities in Louisiana, North Carolina, and North Dakota told us farmers in their communities typically needed to raise building foundations by just a few feet (which they were generally able to do by adding fill dirt) to meet the BFE requirements for structures built inside SFHAs. Farmers we spoke to also concurred with these views. For example, a farmer from Louisiana’s St. Landry Parish who grows rice and soybeans and raises crawfish told us that although most of his structures were outside of the SFHA, he took precautionary steps to elevate them all—those outside it as well as those within it—by at least 2 feet based on his experience with regular flooding in the past and estimated future flooding trends.
However, in both Sutter and Yolo Counties in California, the flood depths were relatively deeper (up to 15 feet in some areas). The Sutter County floodplain manager explained that elevating a structure 3 or more feet could require a base, or building pad, that occupied much more square footage than the structure. It could require additional land to build a slope that was not too steep to allow access to the structure. A slope that was too steep could present an obstacle for truck and equipment movement, making it impractical to conduct business. Further, 7 farmers there told us that it was technically difficult and cost prohibitive to elevate structures to the required height. According to state and local floodplain managers and farmers we spoke with, farmers in Sutter and Yolo Counties who were subject to the NFIP building requirements were also facing challenges flood-proofing their new or substantially expanded agricultural structures to comply with NFIP building requirements. FEMA allows new, substantially improved, or substantially damaged nonresidential structures, including agricultural structures, to be dry flood-proofed (made watertight below the BFE). However, according to FEMA guidance, dry flood-proofing is often feasible only when the flood depth is less than around 3 feet, because deeper flood depths produce pressure on structures that may crack the walls or cause them to collapse. In addition, a local floodplain manager and a farmer told us that, regardless of the flood depth, it would be difficult to dry flood-proof structures used for rice and fruit drying because these buildings needed large openings for fan exhausts to dry the crops and prevent moisture from spoiling them (see fig. 4). 
FEMA has provided guidance on wet flood-proofing as an alternative to elevation and dry flood-proofing for certain nonresidential structures, including agricultural structures, but officials recognize that this guidance still may not be sufficient for assisting farmers in riverine floodplains with deep flood depths. After a catastrophic flood in the Midwest in 1993, FEMA recognized the need to provide alternative methods of meeting building requirements and that same year issued guidance that allowed certain structures that cannot be elevated or dry flood-proofed to be wet flood-proofed, allowing water to flow through a building while minimizing damage to the structure and its contents. However, wet flood-proofing may not be viable for certain agricultural structures. For example, according to Sutter County's floodplain manager, USDA and the Food and Drug Administration have requirements for the watertight storage of certain farm products, making wet flood-proofing not a viable option. The walnut farmer from Sutter County that we spoke to further explained that as a result of these requirements, he had to seal the structure to prevent cross-contamination of different crops, something that is important for allergy sufferers. Another farmer told us that if water could get into openings, so could pests that would damage crops. Further, crops such as rice would be ruined if moisture entered the structure. Furthermore, FEMA's current guidance does not take into account important changes to the agricultural industry that have occurred in recent years. According to FEMA and USDA officials, the agricultural industry has become more consolidated, which has greatly increased the size and scale of farm operations. For example, supporting agricultural structures are now much more expensive to build and replace and may present unique challenges not envisioned in the existing guidance.
Such changes in the agricultural industry underscore the need for FEMA to periodically update and provide additional guidance that reflects current conditions. The absence of current guidance on alternative methods has led some farmers to "work around" the building requirements. Six farmers we interviewed in Yolo and Sutter Counties in California told us that they worked around the building requirements while trying to expand their businesses. Two farmers in these communities told us that they had quickly built their facilities before flood map revisions placed their farms in SFHAs. A nursery farmer in Sutter County built a laboratory in an existing warehouse to avoid building a separate structure, although he lost the use of the warehouse space. Three of the farmers said that instead of building new structures, they were careful to make incremental additions or repairs that were below NFIP's substantial improvement threshold. Two of the farmers also told us that, rather than building anything separately, they attached every expansion to an existing structure, thus sacrificing space for loading and unloading. Because it is costly, or in certain circumstances not technically feasible, to comply with current NFIP building requirements, some farmers in our selected California communities were concerned about future expansion after recent map updates. Three farmers cited the importance of agriculture to the local economy and said that agriculture was the best use for floodplains. However, these workarounds may not fully address the long-term expansion needs of these farmers, and more importantly, they may ultimately defeat the purpose of the NFIP building requirements by increasing the risks of flood damage to the structures.
FEMA officials stated that it is their practice to update technical guidance as needed, and they recognized that the challenges some farmers faced in expanding or building agricultural structures in SFHAs might call for additional approaches for complying with NFIP building requirements. Officials explained that FEMA has not updated the guidance for wet flood-proofing in over 20 years because the agency thought the guidance covered the types of agricultural structures that could be feasibly wet flood-proofed. However, FEMA has identified the need for better ways to protect structures, especially in wide, expansive floodplains where flood depths may range from a few feet to 20 feet or more. In particular, FEMA officials said they would like to further evaluate the vulnerability of structures and their contents to flood hazards and identify how mitigation measures, such as elevation, dry and wet flood-proofing, and other measures, could be used to minimize flood damage. FEMA also plans to solicit input from structure manufacturers and from farmers. FEMA officials told us that they intend to begin updating all technical bulletins, including the 1993 bulletin, in the next 18 months; however, they are at a preliminary stage and have not yet identified resources for such a study or determined its scope and time frames for completion. In addition, FEMA officials told us that, although a recent statutory mandate in the 2014 Act for providing new guidelines on alternatives to elevation applies specifically to residential structures, they plan to issue broader guidance that could apply to nonresidential structures as well. Without updating and providing additional guidance, FEMA is missing an opportunity to help farmers who face challenges in effectively complying with its building requirements, especially if more agricultural production areas are remapped into SFHAs.
Such guidance may not only be needed by farmers in the selected communities in California that we reviewed, but also in other similar agricultural areas across the country. Specifically, FEMA officials noted that there are other agricultural areas in vast riverine floodplains with deep flood depths across the country— some up to 37 feet—including Southwest Illinois, Northeast Arkansas, Southwest Mississippi, Southeast North Carolina, and Northwestern Missouri. Some stakeholders from selected communities stated that NFIP’s building requirements in SFHAs could contribute to the long-term economic decline of some small towns in rural areas. The local floodplain manager from Yolo County told us that in addition to difficulties in building and expanding agricultural structures, demand for farm worker housing is strong, and the requirement that new or substantially improved homes be elevated up to or above the BFE, which can be up to 15 feet, adds significantly to the already high price of housing. The floodplain manager stated that NFIP building restrictions that make it infeasible to build or expand agricultural structures, including farm worker housing, could reduce both the tax base and the economic stability of the county by driving agricultural businesses elsewhere. However, according to FEMA, the current building requirements are effective in reducing flood-related damage and the loss of life because of specific requirements, such as elevation. Further, according to FEMA, properties that adhere to building requirements sustain less damage and as a result, may have lower insurance premiums, which in turn could make insurance rates more affordable and attract broader participation in the program. Farmers and rural residents we interviewed in Yolo County expressed similar concerns about the economic viability of their communities. 
For example, one farmer told us that a small nearby town that had been remapped into an SFHA would likely have trouble attracting viable businesses to keep the community thriving, because the building restrictions meant that businesses could only take over existing structures. Some residents of Yolo County also told us their fire station needed a new roof, which would have been considered a substantial improvement because its cost would have exceeded 50 percent of what the building was worth. However, according to the residents, the county had not allowed permits for any new buildings or substantial improvements to existing buildings since the 2012 map update because FEMA had not designated the BFE for the community. For these reasons, and because undertaking a substantial improvement would have meant elevating or dry flood-proofing the fire station, the town had to do minimal repairs, keeping the costs under the substantial improvements threshold. The mandatory purchase requirement and premium changes resulting from remapping and the elimination of subsidies and grandfathered rates appear to have affected rural home markets more than they have farming operations. For example, some homes affected by these changes might have lost value and become harder to sell and some development has been halted according to some state and local floodplain managers, rural residents, and developers we spoke with. Further, farmers often did not need to buy flood insurance on some structures because they were able to provide their own financing or take other measures, such as obtaining a loan only on land without structures. 
The mandatory purchase requirement and potential premium rate increases associated with recent map updates and, in some cases, legislative changes to NFIP are likely to affect the residential real estate markets in rural areas more than the farming operations in those same areas, according to state floodplain managers and other stakeholders in our selected communities. Representatives from national farm organizations were unaware of any effects of the mandatory purchase requirement on farmers, and local floodplain managers, agricultural lenders, and 12 farmers we spoke with in the selected communities generally agreed that mandatory purchase requirements had not affected agricultural land values. However, all of the state floodplain managers with whom we spoke had heard concerns about the effects on the rural residential real estate market of increased rates resulting from the elimination of some subsidies and grandfathering provisions. In addition, some local floodplain managers, agricultural lenders, and five farmers we spoke with expected that being mapped into an SFHA would have a negative impact on the value of residential housing in certain communities either now or in the future. For instance, an agricultural lender serving both selected communities in Louisiana said that being mapped into an SFHA would decrease the value of residential homes on the market in rural communities because of the increased cost of flood insurance premiums. Also, a resident with whom we spoke who lived in a rural part of Louisiana's Rapides Parish said that being mapped into an SFHA had reduced the value of his house and made it more difficult to sell, because prospective buyers would see it as prone to flooding.
Similarly, in Walsh County, North Dakota, three residents told us that the requirement to buy flood insurance and the rate increases seen in their community after the SFHA was expanded in a 2012 map update had nearly halted the residential real estate market in their community. One resident said that he had tried to move but could not, because potential buyers walked away when they realized his home was in an SFHA. Some concerns were also raised about the overall affordability of NFIP insurance for homeowners mapped into SFHAs. Representatives of the Property Casualty Insurers Association of America told us that remapping would likely cause some affordability concerns as more areas were moved into high-risk zones. However, they noted that remapping would likely not impact residents of rural areas any differently than it would remapped residents in urban areas. Similarly, two residents of Walsh County, North Dakota, told us that the rate increases associated with their recent map change had made it hard for them to now afford to live in their homes. Concerns were also raised about the affordability of insurance premiums and the impact on the housing market once the phasing out of subsidized rates established in the Biggert-Waters Act and the elimination of grandfathering provisions began, but some of these concerns may no longer be relevant, because the 2014 Act amended sections of the Biggert-Waters Act that would have resulted in rate increases for some residential policyholders. At the same time, local floodplain managers and residents of some selected communities said that NFIP insurance requirements associated with being in an SFHA could lead to positive outcomes for rural towns, including more mitigation actions and less development in the floodplain. 
For instance, the local floodplain manager of Duplin County, North Carolina, said that the few homeowners in the SFHA who had not elevated their homes would probably choose to do so, since mitigation actions could lower premium rates. Similarly, a resident of Walsh County, North Dakota who was concerned about rate increases after being mapped into an SFHA, said that he and some of his neighbors had already elevated their homes above the BFE or were considering elevating them. In addition, the local floodplain managers from Sutter County, California, and Duplin County, North Carolina, both stated that inhibiting development in SFHAs could help manage the adverse impacts of floods and help meet one of FEMA’s goals of mitigation. We heard about areas in most of our selected communities where development had begun prior to a map update but was halted when the areas were remapped into SFHAs. For example, in Yolo County, California, and St. Landry Parish, Louisiana, we visited developments that had been partially built before being remapped into SFHAs. The developers in both areas said that the elevation requirements and probable decline in the value of the homes because of the flood insurance requirements would make further development economically infeasible. In both cases, the developers were not sure what would happen to the undeveloped land. We also heard from local floodplain managers in Duplin County, North Carolina, and Yolo and Sutter Counties in California that being mapped into an SFHA had halted development in parts of their counties. While the lack of development in SFHAs may be beneficial for floodplain management, the local floodplain managers and other stakeholders in Yolo and Sutter Counties in California noted the possible negative effects of being remapped into SFHAs—including changes in building requirements and insurance costs—on residents of small rural towns. 
As with building requirements, members of the selected communities said that insurance costs associated with being remapped into an SFHA could contribute to the long-term economic decline of some small towns. For instance, the local floodplain manager in Yolo County, California, told us that the town with the unfinished development that we discussed previously would probably enter a long, slow decline, in part because of recent changes in building requirements and insurance costs resulting from being remapped into an SFHA. He added that not only was it no longer economically feasible to develop certain areas within the town’s borders, but also most of the town’s inhabitants were farm workers who could not afford flood insurance for their houses. However, he said that NFIP requirements were only one factor that was impacting the economic future of this town. In addition, he noted that changes to building requirements and insurance costs resulting from being remapped into an SFHA would not impact all small towns in the same way and that other towns in the community would prosper despite being remapped into SFHAs. An agricultural lender we spoke with in Yolo County agreed that being remapped into SFHAs could have long-term economic impacts on rural towns that depended on the agricultural economy, because farm businesses that were already operating on thin profit margins could be hurt by the additional cost of flood insurance. This is because farmers must accept the market price for their crops, and therefore it may be difficult to pass the price of flood insurance on to their customers, according to one farmer and one lender we spoke with in California. In addition, the local floodplain manager in Sutter County said that some small businesses that supported agriculture, such as a local tractor dealership, had already seen premium rate increases due to the Biggert- Waters Act eliminating their subsidies. 
He believed that some of these small businesses would have to close because they would not be able to afford the full-risk rates for business structures. Like NFIP’s building requirements, the mandatory purchase requirement and changes in flood insurance premiums have had limited effects on farmers we spoke to in the selected communities, except some in California. Many of those we spoke with—including FEMA and USDA officials, representatives of national farming organizations and a floodplain management organization, all state floodplain managers, and one insurance industry organization—were not aware of farm businesses that had been adversely impacted by flood insurance costs. However, representatives of an agricultural floodplain management group, whose members were primarily from California’s Central Valley, said that its members were concerned that the cost of flood insurance on their structures in areas that had recently been remapped into SFHAs could make their businesses unsustainable. For example, according to a rice farmer in California, recent mapping updates placed his structures in an SFHA, raising his flood insurance premiums substantially. He said that his flood insurance premiums were now his third largest production expense. Three farmers in Yolo and Sutter Counties and the local floodplain manager in Sutter County were also concerned about rate increases they expected in the next year as NFIP moved toward full-risk rates. However, six farmers we spoke with in the California communities told us that their flood insurance premiums were a very small portion of their total production cost. In addition, some of the farmers from these communities chose to purchase flood insurance even though they were not required to do so and considered it another cost of doing business. According to state floodplain managers for most of the selected communities, many farmers were not required to insure their structures, for varying reasons. 
For instance, in the two Louisiana communities we reviewed, all but one of the farmers with whom we spoke had farm structures only on parts of their land that lay outside SFHAs. None of these farmers voluntarily purchased flood insurance on these structures. In North Carolina, the floodplain manager said that many farms in the state were sponsored by large corporations that funded the construction of any necessary structures, and as a result farmers did not need loans that might include a mandatory purchase requirement. In contrast, the floodplain manager from California said that institutions that provided loans to farmers for structures, such as rice or prune dryers, might require flood insurance as a condition of the loan, even if they were not required to do so. (Among other requirements, buildings with two or more outside rigid walls and a fully secured roof that are affixed to a permanent site are considered insurable structures, according to NFIP regulations. 44 C.F.R. § 59.1.) For example, one farmer told us he had delayed planting a new crop because he lacked the cash to do so and did not want to take out a loan, because he would have had to purchase flood insurance. He said that he expected it would take him 2 years to raise the needed money. Also, almost all (five of six) of the agricultural lenders with whom we spoke had concerns about requiring farmers to purchase flood insurance on farm structures that had little or no value, such as dilapidated sheds or chicken coops. These lenders told us that this issue was their most significant concern in implementing the mandatory purchase requirement for farm loans. These structures often provide little to no economic value to farmers, and lenders said that they would not require insurance on them in the absence of the mandatory purchase requirement because they did not need to use the structures as collateral. Two of the lenders told us that they had lost business because of this requirement.
Further, one lender told us that it was difficult to determine the replacement value of a building that the appraiser valued at zero or in some cases did not even include in the appraisal. One lender told us that in these situations their loan officers worked with the farmers to exclude the structures from the mortgage to avoid the mandatory purchase requirement. Local floodplain managers, farmers and lenders identified several options to help farmers located in SFHAs manage NFIP requirements for building new or substantially improved structures and lowering the cost of NFIP insurance. The most commonly cited option involved exempting agricultural structures from NFIP building requirements and the mandatory purchase requirement. Other options included charging insurance premiums based on an area’s historical flood losses, accounting for some level of protection by certain unaccredited levees, providing need-based assistance to farmers and rural residents, and increasing funding for mitigation efforts. However, FEMA officials, experts from national floodplain management and city and regional planning organizations, and academics told us that many of these options carried risks and may run counter to the NFIP objectives. Exempt Agricultural Structures. The most commonly cited option from farmers and local lenders, mainly from California and Louisiana, involved exempting new agricultural structures and those that needed substantial improvements from NFIP building requirements and the mandatory purchase requirement. Legislation has been proposed to amend NFIP to include relaxing NFIP requirements for some agricultural structures, including the Agricultural Structures Building Act of 2013, which aims to allow farmers to repair, expand, and construct agricultural structures without elevation in SFHAs. 
In addition, one group has advocated the creation of a separate agricultural zone that would not require expensive elevation and dry flood-proofing but would require wet flood-proofing of certain structures. Some farmers from Sutter and Yolo Counties in California told us that they did not believe that the flood risk for their areas was high, since these counties have not experienced a major flood since the 1950s. The farmers have said that they would be willing to assume all risks and opt out of federal disaster relief if they could expand and construct buildings without being required to follow NFIP building requirements. However, experts from national floodplain management organizations and academics told us that such exemptions were counter to the objectives of NFIP and carried significant risks. For example, one expert indicated that it might be difficult to differentiate agricultural structures from other nonresidential structures that may also store agricultural products (e.g., a corner store or a large industrial facility that may also store grain in an adjacent warehouse). He said that the tendency would be to classify any structures that could be remotely related to agriculture as agricultural structures. Further, experts we spoke to indicated that such an exemption could set a precedent, leading others to ask for similar exemptions. FEMA officials shared these views, adding that FEMA had no legal authority to allow farmers or any other specific population group to opt out of disaster relief. According to FEMA officials, allowing farmers to assume all risks and not receive disaster relief would require further legislative changes to the Stafford Disaster Relief and Emergency Assistance Act. Furthermore, one of the primary goals of FEMA’s building requirements is to help reduce flood-related property damage. 
Complying with FEMA's building requirements would reduce flood-related losses and lower insurance premiums for compliant structures, according to FEMA officials. They added that this reduction in turn may help attract broader participation in the program. Exempting structures may defeat this goal and encourage farmers to build noncompliant structures in high-risk areas that may inadvertently cause damage to nearby communities, according to officials. For example, agricultural structures that do not adhere to building requirements—that is, that are not elevated or flood-proofed—could be washed downstream, creating blockages that could cause additional flooding in communities there. Both FEMA and the experts told us that while farmers might view their choices as affecting only themselves, flood mitigation needed to be considered holistically from the perspective of risks to the larger community. Further, experts indicated that exempting structures may reinforce farmers' potential misperceptions of their flood risks. Charge Insurance Premiums Based on Historical Losses to Flooding. Some farmers, rural residents, state and local floodplain managers, and other organizations have suggested creating a variable premium rate structure based on historical flood risks in different areas. For example, some farmers from California told us that they should pay lower flood insurance premiums than others residing in areas that the farmers consider more flood-prone, such as coastal areas, as these farmers had not experienced flooding since the 1950s and did not perceive their flood risks as significant. However, according to FEMA, the premium rates are determined by flood zone, among other factors, and policyholders in high-risk coastal areas (V zones) already pay higher rates than policyholders in other zones. Further, FEMA stated that flood maps already account for historical floods, in addition to other factors.
According to the national floodplain management expert we spoke with, some states that had so far collected less in claims from NFIP than other states might welcome this option. However, this expert also noted that people tended to underestimate their long-term flood risks. Exempt Low-Value Agricultural Structures. As mentioned earlier, lenders from four of the selected communities suggested giving them the flexibility to decide whether a farmer needed flood insurance on low-value agricultural structures. Some lenders told us that they did not need to use the low-value structures as collateral. Experts indicated that this option could be further explored, provided that independent third parties appraised the structures and confirmed their values. FEMA officials also noted that federal financial regulators, not the agency, set the standards for insurance requirements for low-value structures and that FEMA did not have the authority to dictate to lenders what they could do. According to FEMA, in some instances lenders may require insurance even though it may not be required under the law. Therefore, farmers may face the prospect of paying for flood insurance coverage on properties that have low value. Account for Some Protection Provided by Unaccredited Levees. According to a floodplain manager from Sutter County, California, and others, unaccredited levees still provide some protection, and insurance premiums should reflect this fact. The experts we spoke with said that this option would help adjust insurance rates and provide more flexibility for policyholders in adhering to NFIP building requirements and mandatory purchase requirements. FEMA recognizes that unaccredited levee systems may still provide some measure of protection against flooding and has developed Levee Analysis and Mapping Procedures (LAMP) to account more precisely for the level of protection levees provide when mapping flood risk.
LAMP's goal is not to reduce insurance rates but to use the best scientific methodologies to more accurately determine flood risks and help ensure that premiums are based on the most accurate determination of flood risk. For example, LAMP may determine that an area around the levee should be in zone D (a non-SFHA area with undetermined risks). The levee may still technically not be accredited, but structures located in zone D are subject to no mandatory purchase requirement or building requirements because zone D is not considered an SFHA. Policyholders in this zone would not be required by law to purchase insurance, but FEMA strongly advises that they do. However, some experts said that determining the safety of levees was difficult. FEMA officials noted that while LAMP allowed for a more detailed analysis of unaccredited levees, this analysis might not always result in lower BFEs, smaller SFHAs, or reduced NFIP premiums. FEMA and other experts emphasized that levees were never 100 percent safe and that communities needed to acknowledge the possibility that any levee—including those that are accredited to provide protection for a 1 percent annual event—could fail. Provide Need-Based Assistance. Some farmers also cited need-based assistance as an option to help those who could not afford NFIP premiums to meet the insurance requirements. In general, stakeholders agreed that this option warranted further exploration, since flood insurance has been an affordability issue for many people. (We have previously identified targeted assistance or subsidies based on the financial need of policyholders as an option to consider to reduce the financial impact of subsidies on NFIP. See GAO, Flood Insurance: More Information Needed on Subsidized Properties, GAO-13-607 (Washington, D.C.: July 3, 2013).) However, according to FEMA officials, the agency currently does not have the statutory authority or resources to provide need-based and targeted assistance to help property owners with NFIP insurance premiums.
As required by the Biggert-Waters Act and the 2014 Act, the National Academy of Sciences is studying the issue of affordability but has not yet produced its report. FEMA officials said that it would be premature to comment on how need-based assistance might operate. Increase Funding for Mitigation Efforts. FEMA supports a variety of flood mitigation activities that are designed to reduce the risk of flood damage and the financial exposure of NFIP. These activities, which are mostly implemented at the state and local levels, include hazard mitigation planning; the adoption and enforcement of floodplain management regulations and building codes; and the use of hazard control structures such as levees, dams, and floodwalls or natural protective features such as wetlands and dunes. Additionally, property-level mitigation options include elevating a building to or above the area's base flood elevation, relocating the building to an area of less flood risk, or purchasing and demolishing the building and turning the property into green space. However, the criteria for obtaining mitigation funding tend to favor large communities with high population densities, according to FEMA officials. The officials indicated that in general, agricultural areas and rural communities may be unlikely to meet these criteria and thus may have difficulty obtaining mitigation funding. A number of rural and agricultural areas have recently been mapped into SFHAs. Farmers with new or substantially improved structures in these areas must now comply with NFIP building requirements, and farmers in some locales—specifically counties that we visited in California—face challenges meeting them. Based on information from FEMA, complying with NFIP's building requirements may be a broader problem applicable to agricultural communities that have vast floodplains with deep flood depths similar to those in California. The two options for complying with the program's building requirements—elevating and dry flood-proofing—are not always feasible for certain structures in these types of locations.
For example, farmers in areas with deep flood depths cannot realistically elevate large structures to meet FEMA requirements and may not be able to dry flood-proof all structures. With regard to wet flood-proofing for some nonresidential structures, including certain agricultural structures, FEMA last updated its guidance for granting such variances in 1993. Although FEMA typically updates guidance as needed and acknowledges the challenges some farmers face, it has not updated its guidance with alternatives for complying with building requirements in over 20 years, or expanded it to reflect changes in the agricultural industry. Updated and detailed guidance that provides alternative mitigation methods for protecting agricultural structures from flooding and takes into account relevant changes to the agricultural industry would be an important step in assisting farmers in identifying feasible alternatives to complying with building requirements in expansive floodplains with deep flood depths. As FEMA determines the scope of its efforts to revise its existing guidance, we recommend that the Secretary of the Department of Homeland Security (DHS) direct the Administrator of FEMA to update existing guidance to include additional information on and options for mitigating the risk of flood damage to agricultural structures to reflect recent farming developments and structural needs in vast and deep floodplains. We provided a draft of this report to the Department of Homeland Security (DHS) for its review and comment. DHS provided written comments that are presented in appendix IV. In its comments, DHS concurred with our recommendation to update existing guidance to include additional information on and options for mitigating the risk of flood damage to agricultural structures to reflect recent farming developments and structural needs in vast, deep floodplains. In particular, the letter noted that FEMA recognizes that agriculture is a good use of the floodplain. 
Further, changes in the agricultural industry and the diversity of agricultural structures are important to recognize in future guidance. FEMA stated that it is working to determine the best approach to update its guidance, but has not yet determined a completion date. FEMA also provided technical comments, which we incorporated, as appropriate. As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to FEMA and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report discusses (1) the effects on farmers and rural residents of the National Flood Insurance Program’s (NFIP) building requirements for agricultural and residential structures, (2) the effects of the mandatory purchase requirement and changes in premium rates, and (3) options that have been proposed to address any issues resulting from changes to NFIP requirements and stakeholders’ views on these proposals. We focused our review on riverine rural and agricultural floodplains and excluded coastal areas. For all objectives, we analyzed relevant laws, as well as Federal Emergency Management Agency (FEMA) regulations and policies, including building requirements for properties located in special flood hazard areas (SFHA), flood mapping modernization efforts, and the analysis and mapping procedures for unaccredited levees. We also reviewed statutory requirements such as the mandatory purchase requirement for properties located in SFHAs.
We reviewed the Biggert-Waters Flood Insurance Reform Act of 2012 (the Biggert-Waters Act), including provisions to phase out some premium subsidies. We also reviewed provisions of the Homeowner Flood Insurance Affordability Act of 2014 (2014 Act) that repealed or altered portions of the Biggert-Waters Act. We identified and reviewed research on the effects of NFIP requirements on farmers and rural residents. Levees are man-made structures, usually earthen embankments, designed and constructed in accordance with sound engineering practices to contain, control, or divert the flow of water to provide protection from temporary flooding. 44 C.F.R. § 59.1. Levees that are accredited by FEMA can result in a community being mapped in a flood zone with a lower risk than it would be without the accredited levee. We interviewed experts from flood management and city and regional planning organizations (i.e., American Planning Association, Association of State Floodplain Managers, and National Association of Flood & Stormwater Management Agencies). We interviewed academics in the areas of floodplain management, officials from FEMA’s Mapping, Insurance, Building Science and Flood Management Branches, and officials from the Department of Agriculture’s (USDA) Economic Research Service and Rural Development branches. In addition, we interviewed representatives of Agricultural Floodplain Management Alliance (AFMA) members primarily in California, and the insurance industry. To identify the locations of rural and agricultural areas in SFHAs, we distinguished rural and agricultural land areas from urban land areas. FEMA does not make such a distinction for the purposes of administering NFIP. To make these distinctions, we first analyzed data from the U.S. Census Bureau (2010) and USDA’s Atlas of Rural and Small Town America (2007) to determine the rural and agricultural areas within the United States.
We defined rural areas as areas that were not considered urbanized areas or urban clusters using Census data and agricultural areas as counties where 50 percent or more of the land area was used for farming. We considered all other areas as urban (see fig. 5). We reviewed information available online from the Census web site and the USDA web site on the data quality assurance processes for the data. We concluded that the Census and USDA data that we used were sufficiently reliable for purposes of using them as a base for this determination. We provided FEMA the data on rural and agricultural areas described above. FEMA mapping specialists used the data we provided them and combined it with FEMA’s flood map data. For the rural and agricultural areas with maps that had been converted to a digital format as of February 2014, FEMA mapped the SFHAs. For the rural and agricultural areas that had flood maps that had not yet been converted to digital format as of February 2014, FEMA showed these areas on the map. FEMA excluded areas with coastal flood zones from the map. To determine the number and percentage of policyholders located in rural and agricultural riverine SFHAs, we determined which ZIP codes were in the rural, agricultural, and urban areas. If 50 percent or more of land area of a ZIP code was within a rural or agricultural area, we considered it a rural or agricultural ZIP code. We analyzed FEMA’s policy data as of September 30, 2013, (most recently available fiscal year-end data), to determine how many policies were zoned in an SFHA in the ZIP codes we deemed rural or agricultural using the method described above. We excluded policies with a coastal flood zone designation because the scope of this study was on riverine flooding. 
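As a minimal sketch, the 50-percent land-area rule described above could be implemented as follows; the ZIP codes, land-area figures, and data structures here are hypothetical illustrations, not GAO's actual processing:

```python
# Classify ZIP codes as rural/agricultural vs. urban using the 50-percent
# land-area rule described above. All figures below are hypothetical.

def classify_zip(rural_or_ag_area_sq_mi, total_area_sq_mi):
    """Return 'rural/agricultural' if 50 percent or more of the ZIP code's
    land area falls within a rural or agricultural area, else 'urban'."""
    share = rural_or_ag_area_sq_mi / total_area_sq_mi
    return "rural/agricultural" if share >= 0.5 else "urban"

# Hypothetical ZIP codes: (rural/agricultural land area, total land area),
# both in square miles.
zips = {
    "95991": (180.0, 200.0),   # mostly farmland
    "90012": (1.0, 10.0),      # dense city core
    "27962": (50.0, 100.0),    # exactly at the 50-percent threshold
}

for code, (ag_area, total_area) in zips.items():
    print(code, classify_zip(ag_area, total_area))
```

Under this rule, a ZIP code sitting exactly at the 50-percent threshold counts as rural or agricultural, matching the "50 percent or more" wording in the text.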
To determine the percentage of the population mapped into or out of SFHAs because of FEMA’s Map Modernization initiative, we analyzed available FEMA data on the number of people that received a map change at the Census Block Group level under this initiative. We determined which Census Block Groups were in rural and agricultural ZIP codes and compared the number of people that received a change in SFHA designation in those Census Block Groups to population data from the 2010 Census, which was also provided by FEMA. We reviewed documentation on how the data were collected and interviewed a FEMA official on the usability of the data. We determined these data were sufficiently reliable for our purposes. To assess any effects of NFIP’s building requirements and the mandatory purchase requirement on farmers and rural residents, we conducted case studies in eight selected NFIP communities. We selected these communities using the following criteria: crop and livestock production requiring nonresidential farm structures or nearby on-farm processing (e.g., rice, corn, soybeans, cotton, sugar beets, hogs, chickens, and cattle (dairy)); some agricultural land located in SFHAs that was prone to flooding; and geographic variation (e.g., East Coast, West Coast, the South, and the Midwest) among the riverine agricultural areas located in SFHAs across the country. We selected California, Louisiana, North Carolina, and North Dakota as key states. We then interviewed the four state floodplain managers, one from each state, to obtain their views on any effects NFIP building requirements and the mandatory purchase requirement have had or could have on farmers and rural residents. In addition, we solicited their input, as well as additional input from three state agricultural extension specialists in California, Louisiana, and North Carolina, in identifying two additional communities in their states that met our criteria.
The eight selected communities were: Sutter County, California; Yolo County, California; Rapides Parish, Louisiana; St. Landry Parish, Louisiana; Duplin County, North Carolina; Tyrrell County, North Carolina; Cass County, North Dakota; and Walsh County, North Dakota. We interviewed eight local floodplain managers and five agricultural extension service officials in the suggested communities to obtain their views on the effects of NFIP on farmers and rural residents. We also requested the help of the floodplain managers and extension personnel in identifying local farmers and rural residents with properties located in SFHAs. The local officials helped us identify a total of 24 farmers and 10 rural residents from the selected communities. Although we provided the officials with guidance for the characteristics of persons identified, we did not independently verify that all of our criteria were met and acknowledge that some selection bias may be present since we relied on local officials for selecting the farmers to participate in our study. We contacted the people identified for each community. We conducted structured interviews with all farmers and rural residents who had been remapped into SFHAs according to local officials and could provide first-hand perspectives on any challenges they faced in complying with NFIP’s building requirements and the mandatory purchase requirement. We also discussed identified options to address these challenges. We spoke with some farmers and rural residents who had been remapped into SFHAs after their community’s initial flood map had been established and some farmers and rural residents who were not currently mapped into an SFHA. We also spoke with six agricultural lenders about the effect insurance requirements had on farmers and rural residents and with two developers about the effects of the requirements on rural communities.
We then summarized all interviews and analyzed them by category of questions: NFIP building requirements, the mandatory purchase requirement, effects on the community, and options to address these challenges. Table 2 shows, for each of the eight selected communities, the number of farmers and rural residents with whom we spoke and the major crops produced by those farmers. We could not obtain the same number of interviews in each community because the local floodplain managers and agricultural extension specialists who provided referrals supplied different numbers and types of contacts in each of the selected communities. In addition, the relationships between the local floodplain manager and the contacts sometimes differed, and in some cases a relationship may have affected whether we were able to obtain an interview with a given person. For example, some successful contacts served on community water management task forces with the local floodplain manager. We visited California and Louisiana and interviewed the local farmers and residents. For the other two states (North Carolina and North Dakota), we interviewed the farmers and rural residents by telephone. The purpose of our extensive work in these selected communities was to illustrate and more fully understand farmers’ and residents’ experiences in dealing with NFIP’s requirements. Our individual interviews were not designed to demonstrate the extent of an issue as a survey might do, and we determined that personal contact would prove more reliable in completing interviews with this rural population. In addition, through individual interviews we were able to obtain a more complete understanding of each person’s perspective, the reasons for their opinions or attitudes on specific topics, and their insights into concerns related to NFIP requirements, all of which would supplement the information provided by state and local NFIP officials.
The combination of design, targeted research questions, multiple sources of information, the use of selected representative communities to address the research questions and systematic analyses all serve to support greater generalizability of our findings. Nevertheless, due to the differing nature of communities and their responses to the NFIP requirements, a possibility exists that had we selected different communities we might have found some different results. We believe that the patterns and consistency of our findings within and across our selected cases support the widespread applicability of our findings. To identify options to address any challenges farmers and rural residents faced in complying with NFIP’s building requirements and the mandatory purchase requirement, we gathered suggestions from local NFIP administrators, local lenders, farmers, and rural residents that we met with during our case studies. We then asked experts from flood management and city and regional planning organizations, cognizant academics, and officials from FEMA to comment on the ideas that we gathered and summarized their views. To determine historical NFIP premium and claims amounts, we analyzed annual NFIP premium data for years 1994-1998 and 2000-2013, and the NFIP claims database as of September 30, 2013 (most recently available fiscal year-end data). We adjusted these premium and claim amounts for inflation to report them in constant 2014 dollars. We conducted electronic testing including checks for outliers and missing data. We also interviewed FEMA officials on the usability and reliability of the data and reviewed our past assessments of these data. We determined these data were sufficiently reliable for our purposes. We determined the premiums and claims attributable to rural and agricultural areas and to urban areas using the ZIP codes for rural, agricultural, and urban areas we found using the method described above. 
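The constant-dollar conversion mentioned above can be sketched as follows; the report does not name the price index GAO used, so the index values in this example are illustrative placeholders only:

```python
# Convert a nominal dollar amount from a given year into constant 2014
# dollars by scaling it with the ratio of price-index levels.
# The index values below are illustrative placeholders, not GAO's actual index.

PRICE_INDEX = {1994: 148.2, 2000: 172.2, 2013: 233.0, 2014: 236.7}

def to_2014_dollars(amount, year):
    """Scale a nominal amount into constant 2014 dollars."""
    return amount * PRICE_INDEX[2014] / PRICE_INDEX[year]

# A premium dollar collected in 1994 represents more purchasing power than
# one collected in 2013, so earlier amounts scale up more.
print(to_2014_dollars(1_000_000, 1994) > to_2014_dollars(1_000_000, 2013))  # prints True
```

Expressing every year's premiums and claims in the same 2014 dollars is what makes the multi-year totals in the tables comparable across the 1994-2013 period.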
We used 2007 agricultural data and 2010 rural and urban data as the base years for determining whether a ZIP code area was rural, agricultural, or urban. As a result, we may under-represent the premiums and claims attributable to the rural and agricultural areas for earlier years because urban areas have tended to grow larger over time. Data were not available for 1999 and the years prior to 1994 that would allow us to determine premium amounts comparable to those we reported for 1994 through 2013. FEMA told us that the available premium data for 1999 and years prior to 1994 were for all policies that had been in place during the year, as opposed to the policies in force at a specific point in time of each year. Using these data would have resulted in overstated premiums. Also, FEMA told us that in some of the earlier years ZIP codes were not reported consistently by the insurance companies. In some years, ZIP codes were not available at all (1978–1981, 1983, and 1992). We analyzed FEMA data on National Flood Insurance Program (NFIP) premiums and claims from 1994 through 2013 (except 1999) to determine the claims FEMA paid to, and the premiums it took in from, rural and agricultural riverine areas and urban riverine areas. We also analyzed the total premiums and claims for rural and agricultural areas and urban areas on a state-by-state basis for this time period. Overall, our analysis of premiums and claims indicates that in both rural and agricultural and urban areas nationwide, policyholders have historically received more in claims than they have paid in premiums. However, flooding is a highly variable event, with losses differing widely from year to year. Therefore, analysis of historical data can lead to unreliable conclusions about the actual flood risk faced by a given state or area. Also, catastrophic events greatly impact the long-term aggregate experience of a state.
While the difference between premiums and claims in rural and agricultural and urban areas is not a meaningful measure of whether policyholders are paying premiums commensurate with their risk because NFIP premiums are intended to cover losses as well as operating expenses, among other reasons, it provides additional descriptive information. Table 3 shows NFIP premiums and claims of policyholders in rural and agricultural areas from 1994 through 2013 (except 1999). This information provides some indication of the trends over this period for rural areas. Similarly, table 4 provides 1994-2013 (except 1999) premium and claims data for urban areas. Table 5 includes available premium and claims data by year in the rural and agricultural riverine areas of each state. Because comparable 1999 premium data were not available, the ratio of claims to premiums for some states may be distorted. In 1999, some states on the east coast experienced large losses from Hurricane Floyd likely resulting in high claim amounts. According to FEMA, for example, NFIP policyholders in the state of North Carolina received over $141 million in claims between September 1999 and June 2000. If the premiums and claims for 1999 were included, the ratio of claims to premiums for states affected by Hurricane Floyd could have been larger. Table 6 provides the same premium and claims information for urban areas by state. Additional study would be required to determine whether policyholders in some states with lower losses are paying a higher premium than is appropriate for their risk and others paying too little. For example, our analysis did not control for differences in the type of policy purchased, such as the mix of certain property types across states and insurance coverage amounts, which could affect both premiums and claims. In addition, we did not control for differences in the mix of subsidized and full-risk policies or the impact of subsidized premiums on our results. 
As we have reported previously, some states have a relatively large number or proportion of subsidized properties that generally would lead to higher expected claims relative to premiums. The limitations in setting full-risk rates that we discussed in the prior report could result in systematic mispricing relative to risk that becomes apparent only over long periods. Further, the analysis conducted for this report included both subsidized and full-risk properties, and so the results should be considered in this context. The following are some basic characteristics of the selected communities: Sutter County, California; Yolo County, California; Rapides Parish, Louisiana; St. Landry Parish, Louisiana; Duplin County, North Carolina; Tyrrell County, North Carolina; Cass County, North Dakota; and Walsh County, North Dakota. Tables 7 to 14 show, for each individual community, the total number of National Flood Insurance Program (NFIP) policies, the number of policies in a special flood hazard area (SFHA), the number of miles of levees in the county, and the top agricultural commodities in the county. Figures 6 to 11 show FEMA’s flood maps for the counties, when available. In addition to the contact named above, Triana McNeil and Jill Naamane (Assistant Directors); Simin Ho (Analyst in Charge); Emily Chalmers; William Chatlos; Barbara El Osta; Melissa Kornblau; John Mingus; Marc Molino; and Ruben Montes De Oca made key contributions to this report. | NFIP helps protect property in high-risk floodplains by, among other things, requiring communities that participate in the program to adopt floodplain management regulations, including building requirements for new or substantially improved structures such as elevating, dry flood-proofing, or wet flood-proofing structures. GAO was asked to evaluate the possible effects of NFIP, including its building requirements, on farmers in riverine areas that have a high risk of flooding.
This report examines, among other things, the effects of building requirements on farmers in high-risk areas and options that could help address any challenges farmers face. To do this work, GAO analyzed laws, regulations, and FEMA policy and claims data; interviewed 12 state and local floodplain managers, 24 farmers, and 6 lenders in 8 selected communities in California, Louisiana, North Carolina, and North Dakota (selection based on geographic diversity, presence of high-risk flood areas, and type of farming that required on-site structures); and interviewed flood management and planning experts and FEMA officials. The effects of the National Flood Insurance Program's (NFIP) building requirements for elevating or flood-proofing agricultural structures in high-risk areas varied across selected communities, according to interviews GAO conducted with floodplain managers and farmers. Specifically: Floodplain managers and 12 farmers in selected rural communities with whom GAO spoke in Louisiana, North Carolina, and North Dakota generally were not concerned about these requirements. Most of these farmers told GAO that they had land outside the high-risk areas where they could build or expand their structures, or they could elevate their structures relatively easily. Floodplain managers in selected California communities told GAO that farmers in their communities had been adversely affected by the building requirements. They said that most farm land was in high-risk areas and elevation of structures would be difficult and costly—due to the relatively deep flood depths, structures would be required to be elevated up to 15 feet to comply with the building requirements. They also indicated that some structures were difficult to make watertight below the projected flood level (dry flood-proofing). 
According to a California floodplain manager and several farmers with whom GAO spoke, the farmers who were adversely affected by the building requirements have had to work around outdated Federal Emergency Management Agency (FEMA) guidance that does not fully address the challenges of vast and relatively deep floodplains or reflect industry changes. For example, the 1993 guidance from FEMA allowed an alternative flood-proofing technique (wet flood-proofing) that permits water to flow through certain agricultural structures in expansive high-risk areas. However, farmers in the California communities told GAO this was not a viable option because pests might enter openings and contaminate crops stored inside. FEMA typically updates guidance as needed but acknowledged the need for additional guidance that covers all of the different types of agricultural structures and reflects recent developments in the size and scale of farm operations, including supporting structures that were expensive to build and replace. Additional and more comprehensive guidance would allow FEMA to better respond to recent developments and structural needs in vast and deep floodplains. Some local floodplain managers, farmers, and lenders from the selected communities identified options to help farmers manage the challenges of building or expanding agricultural structures in high-risk areas, but many of the options would entail certain risks and may run counter to the objectives of NFIP. For example, one commonly cited option calls for exempting agricultural structures from building requirements, with farmers assuming all of the flood risk and opting out of federal disaster relief. Both FEMA and the experts noted such an exemption could set a precedent, leading others to ask for similar exemptions. Further, FEMA officials stated that the agency had no legal authority to allow farmers or any other group to opt out of disaster relief. 
The Administrator of FEMA should update existing guidance on mitigating the risk of flood damage to agricultural structures to include additional information that reflects recent farming developments and structural needs in vast and deep floodplains. FEMA agreed with the recommendation. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Antipsychotic drugs are classified into two sub-groups. The first group, or generation, of antipsychotic drugs—also known as “conventional” or “typical” antipsychotic drugs—was developed in the mid-1950s. Examples include haloperidol (Haldol®) and loxapine (Loxitane®). The second generation of antipsychotic drugs, known as “atypical” antipsychotics, was developed in the 1980s. Examples include aripiprazole (Abilify®) and risperidone (Risperdal®). Atypical antipsychotics became more popular upon their entry into the market due to the initial belief that these drugs caused fewer side effects than the conventional antipsychotics. Each antipsychotic drug has its own set of FDA-approved indications. The vast majority of antipsychotic drugs are FDA-approved for the treatment of schizophrenia, and most atypical antipsychotic drugs are FDA- approved for the treatment of bipolar disorder. In addition, some antipsychotics are FDA-approved for the treatment of Tourette syndrome. CMS guidance to state nursing home surveyors also recognizes antipsychotics as an acceptable treatment for conditions for which the drugs have not been FDA-approved, such as for the treatment of Huntington’s disease. In 2005, FDA recognized the risks associated with atypical antipsychotic drugs and required those drugs to have a boxed warning, citing a higher risk of death related to use among those with dementia. In 2008, FDA recognized similar risks for conventional antipsychotic drugs and required the same boxed warning. Besides the risks described in the boxed warning, use of antipsychotic drugs carries risks of other side effects, such as sedation, hypotension, movement disorders, and metabolic syndrome issues. Clinical guidelines consistently suggest the use of antipsychotic drugs for the treatment of the behavioral symptoms of dementia only when other, non-pharmacological attempts to ameliorate the behaviors have failed, and the individuals pose a threat to themselves or to others. 
For example, AMDA–The Society for Post-Acute and Long-Term Care Medicine suggests first assessing the scope and severity of the behavior and identifying any environmental triggers for the behavior. A medical evaluation may determine whether the behavioral symptoms are associated with another medical condition, such as under-treated arthritis pain or constipation. In its clinical guideline, AMDA cited conflicting evidence surrounding the effectiveness of antipsychotic drugs in treating the behavioral symptoms of dementia. It noted one evidence review that found significant improvement in symptoms with the treatment of certain atypical antipsychotic drugs, but also noted that other reviews signaled there were no significant differences attributable to atypical antipsychotic drugs. Other non-pharmacological interventions that can be attempted prior to the use of antipsychotic drugs may focus on emotions, sensory stimulation, behavior management, or other psychosocial factors. An example of an emotion-oriented approach is Reminiscence Therapy, which involves the recollection of past experiences through old materials with the intention of enhancing group interaction and reducing depression. An example of a sensory stimulation approach is Snoezelen Therapy, which typically involves introducing the individual to a room full of objects designed to stimulate multiple senses, including sight, hearing, touch, taste, and smell. This intervention is based on the theory that behavioral symptoms may stem from sensory deprivation.
A 2012 white paper published by the Alliance for Aging Research and the Administration on Aging, a part of the ACL, noted that advancements have been made with regard to the evidence base supporting some non-pharmacological interventions, but that evidence-based interventions are not widely implemented. Experts referenced in the white paper identified the need for clearer information about the interventions, such as a system to classify what interventions exist and who might benefit from those interventions. Experts also noted that additional research is needed to develop effective interventions. Federal law requires nursing homes to meet federal quality and safety standards, set by CMS, to participate in the Medicare and Medicaid programs. CMS regulations require nursing homes to ensure that residents’ drug therapy regimens are free from unnecessary drugs, such as medications provided in excessive doses, for excessive durations, or without adequate indications for use. Nursing facility staff must assess each resident’s functional capacity upon admission to the facility and periodically thereafter, and provide each resident a written care plan. Based on these assessments, nursing homes must ensure that antipsychotics are prescribed only when necessary to treat a specific condition diagnosed and documented in the patient’s record, and that residents who use antipsychotic drugs receive gradual dose reductions and behavioral interventions, unless clinically contraindicated. Part of the nursing home survey process, otherwise known as nursing home inspections, involves audits of these care plans and assessments. About one-third of older adult Medicare Part D enrollees with dementia who spent over 100 days in a nursing home were prescribed an antipsychotic drug in 2012. Among those Medicare Part D enrollees with dementia who spent no time in a nursing home in 2012, we found that about 14 percent were prescribed an antipsychotic.
In total, Medicare Part D plans paid roughly $363 million in 2012 for antipsychotic drugs prescribed for older adult Medicare Part D enrollees with dementia. We found that about 33 percent of Medicare beneficiaries with dementia who were enrolled in a Part D plan and had a long stay in a nursing home—defined as over 100 cumulative days—were prescribed an antipsychotic in 2012. (See table 1.) We also found that prescribing rates for Medicare Part D enrollees with dementia who were nursing home residents varied somewhat by resident characteristic: Male enrollees were slightly more likely to have been prescribed an antipsychotic drug than female enrollees—about 36 percent and 32 percent, respectively. The prescribing rate declined as Medicare Part D enrollee age increased. For example, about 41 percent of those Medicare Part D enrollees aged 66 to 74 received an antipsychotic prescription, compared to 29 percent of those enrollees aged 85 and older who were prescribed an antipsychotic drug. The prescribing rate for antipsychotic drugs was highest for enrollees in the South, and lowest for enrollees in the West. We found slightly lower rates of antipsychotic drug prescribing when we restricted our analysis to those enrollees with three or more 30-day supply prescriptions during 2012. Specifically, about 28 percent of long- stay Medicare Part D enrollees with dementia were given three or more 30-day supply prescriptions for an antipsychotic drug over the course of 2012. We also found that the majority of prescriptions given to those long- stay Medicare Part D enrollees with dementia—about 68 percent—were for seven or more 30-day supplies of the drug, while only 3 percent were for less than one 30-day supply. 
Consistent with the findings for Medicare Part D enrollees, our analysis of MDS data showed that approximately 30 percent of all older adult nursing home residents—regardless of enrollment in Medicare Part D—with a dementia diagnosis were prescribed an antipsychotic drug at some point during their 2012 nursing home stay. (See fig. 1.) Residents with dementia accounted for a significant proportion of all nursing home residents. In 2012, about 38 percent, or almost 1.1 million of the 2.8 million nursing home residents that year, were diagnosed with dementia. Examining this more comprehensive database of nursing home residents also allowed us to compare the antipsychotic drug prescribing rates of long-stay residents and short-stay residents—those residents who spent 100 days or less in the nursing home. The proportion of residents diagnosed with dementia who were prescribed an antipsychotic drug was greater for long-stay residents than for short-stay residents (about 33 percent versus 23 percent, respectively). (See table 2.) Variation in prescribing rates across resident characteristics was similar to the variation found in the Medicare Part D enrollee long-stay nursing home population. Of those Medicare Part D enrollees with dementia in settings outside of the nursing home, about one in seven (14 percent) were prescribed an antipsychotic. (See fig. 2.) Roughly 1.2 million of the 20.2 million older adult Medicare Part D enrollees living outside of a nursing home in 2012 had a diagnosis of dementia—just above 6 percent. The rate of antipsychotic drug prescribing among older adult Medicare Part D enrollees with dementia was lower for those living outside of nursing homes, compared to those living in nursing homes, given that residents of nursing homes are generally sicker than those living outside of nursing homes. 
We also found that the pattern of variation in antipsychotic drug prescribing for Medicare Part D enrollees outside of a nursing home differed for certain characteristics from the pattern found in the nursing home population. The proportion of Medicare Part D enrollees outside of nursing homes diagnosed with dementia who were prescribed an antipsychotic drug was higher for older enrollees—the opposite of the pattern found in the nursing home setting. (See table 3.) The prescribing rate was also higher for female enrollees outside of the nursing home than for male enrollees, whereas the opposite was true in the nursing home setting. The prescribing rate for enrollees with dementia outside of the nursing home varied less by enrollee location than the rate for those in nursing homes. We found slightly lower rates of antipsychotic drug prescribing for Medicare Part D enrollees outside of the nursing home when we restricted our analysis to those enrollees with three or more 30-day supply prescriptions. Specifically, about 11 percent of enrollees outside of the nursing home received three or more prescriptions for antipsychotic drugs over the course of 2012. About 58 percent of antipsychotic prescriptions for Medicare Part D enrollees with dementia living outside of a nursing home were for seven or more 30-day supplies of the drug, while only 3 percent were for less than a 30-day supply. Medicare Part D plans paid roughly $363 million in 2012 for antipsychotic drugs used by Medicare Part D enrollees with dementia aged 66 and older. (See table 4.) Medicare Part D spending on antipsychotic drugs for enrollees living outside of a nursing home with a dementia diagnosis totaled almost $171 million in 2012, roughly the same as spending for long-stay nursing home enrollees with dementia. Payments for short-stay nursing home enrollees may be low because Medicare Part A often covers drugs administered during short, post-acute stays in nursing homes.
Medicare Part D plans consistently spent more than twice as much on antipsychotic prescriptions for female enrollees as for male enrollees; as reported in table 1, the number of female Medicare Part D enrollees using antipsychotic drugs was also over two times that of males. Internal medicine, family medicine, and psychiatry or neurology physicians prescribed the greatest proportion of antipsychotic drug prescriptions for older adult Medicare Part D enrollees with dementia—about 82 percent in total. Antipsychotic drugs prescribed by these specialties also made up about 82 percent of the Medicare Part D plan payments for antipsychotic drugs—almost $298 million in plan payments. Antipsychotic prescriptions from internal medicine physicians comprised 36 percent of Medicare Part D plan payments for antipsychotic drugs, while family medicine and psychiatry or neurology prescriptions comprised about 30 and 16 percent, respectively. Nurse practitioner and physician assistant prescriptions collectively accounted for almost 5 percent of antipsychotic drug claim payments, while the remaining 13 percent encompassed many specialties. Quetiapine Fumarate, Risperidone, and Olanzapine were the most commonly prescribed antipsychotic drugs for older adult Medicare Part D enrollees with dementia in 2012, comprising approximately $246 million in plan payments. (See table 5.) Haloperidol and Aripiprazole were also commonly prescribed; these two drugs were prescribed to almost 9 and 6 percent of Medicare Part D enrollees with dementia, respectively. Experts we spoke with and research we reviewed commonly identified certain factors that are specific to the patient that contribute to antipsychotic prescribing, such as patient agitation or delusions. Experts and research also identified certain contributing factors that are specific to settings, such as nursing homes or hospitals.
The majority of experts we spoke with and some research articles we reviewed highlighted agitation, aggression, or exhibiting a risk to oneself or others as factors that contribute to the decision to prescribe antipsychotics. For example, in a study examining the MDS from 1999 to 2006 in eight states, 51 percent of aggressive nursing home residents diagnosed with dementia were prescribed antipsychotic drugs in 2006, as opposed to 39 percent of residents with behavioral symptoms but who were not aggressive during that same time period. The study suggested that aggressive residents may have been more likely to be prescribed antipsychotics because of the greater risk of injury associated with the aggressive behavior. This is consistent with findings from our analysis of nursing home assessment data; we found that, of residents diagnosed with dementia and documented as being a risk to themselves or others, 61 percent had an antipsychotic drug prescription in 2012. Many experts we interviewed identified other situations that may warrant the use of antipsychotics despite their risk, such as patients experiencing frightening delusions or hallucinations that cause the patient to act out in ways that may be violent or harmful. Several experts noted that individuals experiencing these psychotic and other behaviors may be suffering from distress and are more likely to be prescribed antipsychotic drugs to ease their distress and improve their quality of life. For example, individuals may injure themselves or strike another resident or staff member because of delusions that these people intend to kill them. A few research articles identified psychotic behaviors as a contributing factor. For instance, one study that examined medical records of more than 200 nursing home residents with dementia found that 47 percent of residents who were on an antipsychotic also had a diagnosis of psychosis. 
The research we reviewed also cited other specific patient characteristics associated with higher antipsychotic use in dementia patients. Patient characteristics such as age, gender, race or ethnicity, and psychiatric diagnoses were associated with higher antipsychotic prescribing in several articles. For example, in one study of nursing home assessments and Medicaid drug claims from seven states, researchers found that nursing home residents with psychiatric co-morbidities, such as anxiety and depression without psychosis, were more likely to be prescribed antipsychotic drugs. Male gender was also mentioned as a patient characteristic associated with higher antipsychotic prescribing in three research articles. In our analyses of 2012 Medicare data, males had a higher prescribing rate in the nursing home, while females had a higher rate outside of the nursing home. Finally, one article found that black nursing home residents were more likely to be prescribed antipsychotic drugs, while another article found that black residents were less likely to receive them when compared to white residents. Experts and research identified factors within the setting that an individual visits or resides in, such as nursing homes or hospitals, as contributing to the decision to prescribe antipsychotic drugs to older adults. Among nursing homes, experts and research cited factors, including the culture of the facility, the level of staff training and education, and the number of staff at the nursing home, as contributing to the decision to prescribe antipsychotic drugs to older adults. Specifically, nursing home leadership—such as administrators and medical directors—and culture were cited by half of the experts and two of the research articles. An expert told us that when the leadership of the nursing home believes it is broadly acceptable to provide antipsychotic drugs to residents with dementia, this belief spreads throughout the facility.
One study examining variation in antipsychotic use in nursing homes looked at the pharmacy claims and nursing home assessments of more than 16,000 residents in 1,257 nursing homes. The study found that new nursing home residents admitted to facilities with high antipsychotic prescribing rates were 1.4 times more likely to receive antipsychotics, even after controlling for patient-specific factors. In addition to nursing home culture and leadership, many experts and two research articles identified staff or prescriber education and training on antipsychotic prescribing for individuals with dementia as affecting antipsychotic drug prescribing. One industry group we spoke with indicated that physician training specifically regarding older adults with dementia in nursing homes and knowledge of related federal regulations are often lacking. Similarly, a study in 68 nursing homes in Connecticut examining knowledge of nursing home leaders and staff, who often set the tone for prescribing antipsychotic drugs and observing patients' behavioral symptoms, found most of the certified nursing assistants—96 percent—were not aware of the serious risks to residents that can result from antipsychotic use. The study also found that 56 percent of direct-care staff believed medications worked well to manage resident behavior. Another article reported that antipsychotic drug prescribing for individuals with dementia decreased from 20.3 to 15.4 percent in one nursing home after the implementation of an educational in-service training designed to reduce the inappropriate use of antipsychotic prescribing and increase documentation of non-pharmacological interventions. In expert interviews, education of staff was identified as a factor that can contribute to minimizing unnecessary antipsychotic prescribing. One provider group noted that, in order to reduce antipsychotic use, a facility would need to invest in professional training for staff in a way that provides information about adequate alternatives to antipsychotic drugs.
Nursing home staffing levels, specifically low staff levels, were also cited as a contributing factor to antipsychotic drug use in one research article and by a few experts. For example, one study examined more than 5,000 nursing homes and 561,000 residents by linking 2009 and 2010 prescription drug claims to the Nursing Home Compare database to identify a nationwide pattern of antipsychotic drug use. The study found the nursing homes in the highest quintiles of antipsychotic drug use had significantly fewer staff than those in the lowest quintiles. An expert group noted that nursing homes with less staff may not have enough activities and oversight for the patients, which in turn may make the nursing home residents susceptible to higher antipsychotic drug use. In addition, the majority of experts we spoke with told us that entering a nursing home from a hospital is a factor leading to higher antipsychotic prescribing in the nursing home. These experts agreed that antipsychotic drugs are often initiated in hospital settings and carried over to nursing home settings. One industry group we spoke with noted that individuals with dementia go to the hospital frequently and can be prescribed an antipsychotic drug if they exhibit disruptive behavior. Another industry group attributed the actual prescribing of antipsychotic drugs to hospital care culture and stated that the prescribing of antipsychotics is a common practice in hospitals for treating individuals with dementia. A research study that examined the medical charts of 73 residents in seven nursing homes found 84 percent of the residents that had been admitted to the nursing home from the hospital were admitted on at least one psychoactive medication—including antipsychotics. Finally, experts we spoke with indicated that caregivers' frustration with the behavior of individuals with dementia can lead to requests for antipsychotic drugs.
For example, an advocacy group we spoke with mentioned that a caregiver may request an antipsychotic drug for an individual with dementia in an effort to keep them in the home. The individual with dementia may not recognize their relative, which can cause them agitation. To keep the individual calm so that they can stay in the home and not be placed in a nursing home, an antipsychotic medication may be prescribed. Representatives from another provider group explained that when an individual with dementia has an unmet need, they may also appear to be in distress, which may cause the caregiver to become frustrated because they do not know how to relieve this distress. HHS agencies, including CMS, AHRQ, and NIH, have taken actions to address antipsychotic drug use by older adults with dementia in nursing homes. However, HHS has done little to address antipsychotic drug use among older adults with dementia living in settings outside of the nursing home. Under the National Plan to Address Alzheimer’s Disease, HHS has a goal to expand support for people with Alzheimer’s disease and their families with emphasis on maintaining the dignity, safety, and rights for those suffering from this disease. To reach this goal, HHS outlined several actions, including monitoring, reporting, and reducing the use of antipsychotics drugs by older adults in nursing homes. CMS has taken the lead in carrying out this work. Other HHS agencies have also done work related to reducing antipsychotic drug use in nursing homes. In 2012, CMS launched the National Partnership to Improve Dementia Care in Nursing Homes with federal and state agencies, nursing homes, providers, and advocacy organizations. This was in response to several reports dating back to 2001 published by the HHS Inspector General and advocate concerns about the persistently high rate of antipsychotic drug use and quality of care provided to nursing home residents with dementia. 
The National Partnership began with an initial goal of reducing the national prevalence of antipsychotic drug use in long-stay nursing home residents by at least 15 percent by December 31, 2012. CMS used publicly reported measures from the Nursing Home Compare website to track the progress of the National Partnership and, according to officials, to reach out to those states and individual facilities with high prescribing rates. In the fourth quarter of 2011, which was deemed the baseline, 23.8 percent of long-stay nursing home residents nationwide were prescribed an antipsychotic drug. While the National Partnership did not reach its target reduction in 2012, by the end of 2013 the national use rate decreased to 20.2 percent, a 15.1 percent reduction. The majority of states showed some improvements in their rates; however, some states showed much more improvement than others. For example, Delaware showed a 27 percent reduction—from 21.3 to 15.5 percent—in the prevalence of antipsychotic drug use from 2011 through 2013, while Nevada saw a smaller reduction of 2.7 percent—from 20.3 to 19.7 percent—during the same period. The National Partnership is working with state coalitions, as well as nursing homes, to reduce this rate even further. In September 2014, CMS established a new set of national goals to reduce the use of antipsychotic drugs in long-stay nursing home residents by 25 percent by the end of 2015 and 30 percent by the end of 2016, which, assuming a baseline of 23.8 percent, would lead to a prescribing rate of 16.7 percent. Beginning in January 2015, CMS's Five-Star Quality Rating System for nursing homes will be based, in part, on this measure of the extent to which antipsychotic drugs are used in the nursing home. The Five-Star Quality Rating System provides a way for consumers to compare nursing homes on the Medicare Web site. Previously, the measure was displayed, but not included in the calculation of each nursing home's overall quality score.
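The reduction figures above are relative to the baseline prescribing rate, not absolute percentage-point drops. A minimal sketch of the arithmetic, using only the figures reported in the text:

```python
def relative_reduction(baseline, current):
    """Percent reduction relative to the baseline rate (not percentage points)."""
    return (baseline - current) / baseline * 100

# National rate: 23.8% at the Q4 2011 baseline, down to 20.2% by the end of 2013.
print(round(relative_reduction(23.8, 20.2), 1))  # 15.1 percent reduction

# Delaware: 21.3% down to 15.5% over the same period.
print(round(relative_reduction(21.3, 15.5)))     # 27 percent reduction

# CMS's end-of-2016 goal: a 30 percent reduction from the 23.8% baseline.
print(round(23.8 * (1 - 0.30), 1))               # 16.7 percent prescribing rate
```

This is why a 3.6-percentage-point decline (23.8 to 20.2) counts as a 15.1 percent reduction against the Partnership's goal.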
Person-centered care is an approach to care that focuses on residents as individuals and supports caregivers working most closely with them. It involves a continual process of listening, testing new approaches, and changing routines and organizational approaches in an effort to individualize and de-institutionalize the care environment. Partners in the National Partnership include organizations representing Medicare beneficiaries and the Advancing Excellence in America's Nursing Homes Campaign, a major initiative of the Advancing Excellence in Long Term Care Collaborative. The National Partnership includes regular conference calls with states, regions, and advocates, and presentations by experts in the field, to share best practices and brainstorm ways to improve dementia care in their facilities. In addition, CMS has taken four additional actions that aim to reduce antipsychotic drug use among older adults in nursing homes:

- CMS provided additional guidance and mandatory training on behavioral health and dementia care from 2012 through 2013 to the state surveyors responsible for reviewing and assessing nursing homes. This was done to improve surveyors' ability to identify the use of unnecessary drugs, including inappropriate use of antipsychotic drugs.
- QIOs have focused some of their efforts on reducing antipsychotic drug use in nursing homes. For example, beginning in 2013, the QIOs provided training to nearly 5,000 nursing homes on the appropriate use of antipsychotic medications.
- CMS recently concluded pilots of a new dementia-focused survey that examines the use of antipsychotic drugs for older adults with dementia living in nursing homes. The pilot consisted of onsite, targeted surveys of dementia care practices in five nursing homes in each of five states. CMS reported that the focused survey pilot results will allow the agency to gain new insight about the current survey process, including how the process can be streamlined to more efficiently and accurately identify and cite deficient practices as well as to recognize successful dementia care programs.
- CMS began reporting the rate of chronic use of atypical antipsychotic drugs by older adult Medicare beneficiaries living in nursing homes for Medicare Part D plans in 2013. This information is publicly available on the Medicare Part D Compare Website, which is used by Medicare beneficiaries comparing Medicare Part D plans.

The measure used for Medicare Part D plans differs in a few respects from the measure used to assess nursing homes. First, the Medicare Part D measure examines chronic use, defined as having at least 3 months or more of a prescription for an atypical antipsychotic drug, whereas the nursing home measure includes any use. Additionally, the Medicare Part D measure only includes atypical antipsychotic drugs, compared to the nursing home measure, which includes all antipsychotic drugs. Of the 421 Medicare Part D plans reporting in 2012, the rate of use among Medicare Part D enrollees residing in nursing homes ranged from 0 to almost 64 percent. The average among all Medicare Part D plans in 2012 was approximately 22 percent of enrollees residing in nursing homes having at least 3 months or more of a prescription. CMS told us that variation in antipsychotic prescribing among Medicare Part D plans may be explained by the prescribing practice in the plan's service area, nursing home willingness to allow the use of antipsychotic drugs for the behavioral symptoms of dementia, resident need, and success in implementing interventions to reduce the inappropriate use of antipsychotic drugs. In addition to CMS actions, AHRQ and NIH have awarded research grants for work related to antipsychotic drug use by older adults with dementia in nursing homes.
AHRQ has funded individual grants for work related to antipsychotic drug use in nursing homes through its Center for Evidence and Practice Improvement and the Centers for Education & Research on Therapeutics (CERT) program. For example, in 2011, CERT funded several project centers for a 5-year period to study a broad range of health care issues, including Rutgers University, which studied patterns of antipsychotic drug use, along with the safety and effectiveness of antipsychotic drug use for individuals living in nursing homes. Within the NIH, the National Institute on Aging and the National Institute of Mental Health have also funded related research, including a number of studies examining the safety of antipsychotic drugs in older adults. Some stakeholders and other provider groups we spoke with expressed overall support of HHS’s efforts, while others cautioned that the emphasis should not curtail access to those individuals who need antipsychotic drugs. Specifically, stakeholders indicated that the collaboration between public and private organizations, as part of the National Partnership, along with the sharing of practices aimed at reducing antipsychotic drug use, contributed to the campaign’s success. Stakeholders also mentioned that the National Partnership allowed nursing homes to pay attention and start talking about issues related to antipsychotic drug use. Some stakeholders further indicated that HHS’s initiatives have brought focus to the issue of antipsychotic drug use in older adults in nursing homes. Conversely, other groups and individuals involved in HHS’s efforts expressed concern that the emphasis on reducing antipsychotic drug use in nursing homes could result in some individuals who need these medications not receiving them. 
One researcher we spoke with noted that because nursing homes' antipsychotic drug use is measured and publicly reported, these facilities may be worried about their antipsychotic drug rate and focus on the bottom-line number instead of what is good for the individual. CMS officials told us that they are careful in their messaging to acknowledge that antipsychotic drugs have a useful prescribing purpose and therefore will never be totally eliminated. They are working with providers to develop a comprehensive view of what a patient potentially needs, emphasizing that using antipsychotic drugs should not be the first-line intervention. While the National Alzheimer's Plan was established to improve care for all individuals with dementia regardless of the setting where they reside, HHS efforts related to reducing antipsychotic drug use among older adults have primarily focused on those living in nursing homes, with less activity geared toward those living outside of nursing homes. HHS officials noted that the focus has been on reducing antipsychotic drug use rates in nursing homes for a variety of reasons, including the severity of dementia among nursing home residents and the agency's responsibility to ensure appropriate training of nursing home staff. However, the risk of antipsychotic drugs to older adults is not specific to those in nursing homes. Furthermore, we found that 1 in 7 Medicare Part D enrollees with dementia outside of the nursing home were prescribed an antipsychotic drug in 2012. We identified one activity by HHS's ACL that examined a topic related to the use of antipsychotic drugs, specifically the use of non-pharmacological interventions in the treatment of individuals with dementia. In 2012, ACL partnered with a research group to conduct a study on non-pharmacological treatments and care practices for individuals with dementia and their caregivers. The study results were presented in a white paper and disseminated on the ACL's Web page.
ACL also included the study results in a newsletter distributed to state organizations on aging. ACL officials also told us that they participate in the National Partnership as a stakeholder organization, including reviewing the training materials that were distributed to nursing homes. However, ACL officials told us that none of their other past activities have dealt specifically with reducing antipsychotic drug use among older adults outside of nursing homes. While ACL has not focused on reducing antipsychotic drug use among older adults outside of nursing homes, ACL is responsible for other parts of the National Alzheimer’s Plan related to improving dementia care in the community. ACL partners with national groups to share information on dementia-related issues such as caring for minority populations with dementia and preventing elder abuse and neglect. As part of this work, ACL works with organizations, such as the Alliance for Aging Research and the National Family Caregiver Alliance, to share research, host webinars and presentations, and promote issues through social media. ACL also funds grants for state long-term care ombudsmen that are responsible for advocating for older adults living in nursing homes, assisted living facilities, and other residential settings for older adults. Stakeholder groups we spoke to indicated that educational efforts similar to those provided under the National Partnership should be extended to those providing care to older adults in other settings, such as hospitals and assisted living facilities. Some stakeholders noted that some of the same material regarding non-pharmacological interventions could be shared with caregivers in these other care settings. Many experts we spoke with said that many nursing home residents come to the nursing home already on an antipsychotic drug. 
Extending educational efforts to caregivers and providers outside of the nursing home could help lower the use of antipsychotics among older adults with dementia living both inside and outside of nursing homes. The decision to prescribe an antipsychotic drug to an older adult with dementia is dependent on a number of factors, according to experts in the field, and must weigh the possible benefits of managing behavioral symptoms associated with dementia against potential adverse health risks. In some cases, the benefits of prescribing the drugs may outweigh the risks. HHS has taken important steps to educate and inform nursing home providers and staff on the need to reduce unnecessary antipsychotic drug use and ways to incorporate non-pharmacological practices into their care to address the behavioral symptoms associated with dementia. However, similar efforts have not been directed toward caregivers of older adults living outside of nursing homes, such as those in assisted living facilities and private residences. Targeting this segment of the population is equally important given that over 1.2 million Medicare Part D enrollees living outside of nursing homes were diagnosed with dementia in 2012 and Medicare Part D pays for antipsychotic drugs prescribed to these individuals. While the extent of unnecessary prescribing of antipsychotic drugs is unknown, older adults with dementia living outside of nursing homes are at risk of the same dangers associated with taking antipsychotic drugs as residents of nursing homes. In fact, the National Alzheimer's Project Act was not limited to the nursing home setting, but calls upon HHS to develop and implement an integrated national plan to address dementia. HHS's National Alzheimer's Plan addresses antipsychotic drug prescribing in nursing homes only, however, and HHS activities to reduce such drug use have primarily focused on older adults residing in nursing homes.
Given that HHS does not specifically target its outreach and education efforts relating to antipsychotic drug use to settings other than nursing homes, older adults living outside of nursing homes, their caregivers, and their clinicians in these settings may not have access to the same resources about alternative approaches to care. By expanding its outreach and educational efforts to settings outside nursing homes, HHS may be able to help reduce any unnecessary reliance on antipsychotic drugs for the treatment of behavioral symptoms of dementia for all older adults regardless of their residential setting. We recommend that the Secretary of HHS expand its outreach and educational efforts aimed at reducing antipsychotic drug use among older adults with dementia to include those residing outside of nursing homes by updating the National Alzheimer’s Plan. We provided a draft of this report to HHS for comment. In its written response, reproduced in appendix III, HHS concurred with our recommendation, stating that the agency will support efforts to update the National Alzheimer’s Plan through continued participation on the Federal National Alzheimer’s Project Act Advisory Council. HHS also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
This appendix describes our methodology for analyzing the 2012 prescribing of antipsychotic drugs for older adults with dementia in nursing homes and other settings, as well as for analyzing Medicare Part D plan payments for these antipsychotic drug prescriptions. It also describes our efforts to ensure the reliability of the data. We used two primary data sources to examine antipsychotic drug prescribing for older adults with dementia: the Medicare Part D Prescription Drug Event (PDE) data to identify antipsychotic drug prescribing for Medicare Part D enrollees in and outside of the nursing home, and the Long Term Care Minimum Data Set (MDS) to identify antipsychotic drug prescribing for all nursing home residents, regardless of Medicare Part D enrollment. To estimate the extent to which older adults residing inside and outside of nursing homes are prescribed antipsychotic drugs, we first analyzed 2012 PDE data for individuals with dementia. We used the Medicare Part D PDE data because Medicare is the primary source of insurance coverage for individuals over the age of 65 and approximately 63 percent of Medicare beneficiaries were enrolled in Medicare Part D in 2012. To identify individuals living in nursing homes, we combined the PDE claims data with data from the MDS, which includes nursing home assessments for all individuals living in nursing homes, regardless of insurance coverage. We also used data from the Medicare Master Beneficiary Summary File (MBSF), as well as the Medicare Part D Risk File to identify diagnoses, including dementia diagnoses and diagnoses for certain conditions for which the Food and Drug Administration (FDA) has approved the use of antipsychotics drugs. We excluded from our estimates individuals with dementia also diagnosed with one of these FDA-approved conditions for antipsychotic drugs—schizophrenia and bipolar disorder. 
The Medicare Part D Risk File contains diagnoses based on claims from the previous year for each enrollee, so our diagnosis categories may be conservative estimates as they did not take into account longer-standing or newer diagnoses. We also excluded enrollees with outlier data, enrollees with less than 12 months of Medicare Part D enrollment in 2012, and those enrollees who died in 2012 because they did not have complete Medicare Part D data for the entire year. Finally, we excluded enrollees who resided outside of the 50 states and the District of Columbia. For these analyses, we define an individual as having been prescribed an antipsychotic drug if they were prescribed at least one prescription for an antipsychotic drug during the year, regardless of how many days supply are covered by the prescription. We identified relevant national drug codes (NDC) using a list of generic names for antipsychotic drugs, and, using those codes, we determined the number and percent of Medicare Part D enrollees who were prescribed an antipsychotic drug in 2012. The specific drugs included are listed in table 6. Within the nursing home population, our analysis of PDE data specifically identified those with a long stay in the nursing home—defined by the Centers for Medicare & Medicaid Services (CMS) as more than 100 days—because drugs for individuals with short stays—100 days or less—are generally covered under Medicare Part A, not Part D. We disaggregated the data to examine certain characteristics, such as gender, age, and geographic location. To supplement our analysis of the Medicare Part D data for the nursing home population, we also analyzed 2012 data on antipsychotic prescribing and diagnoses among nursing home residents available in the MDS. 
This allowed us to look at a more comprehensive population of nursing home residents—all residents in a Medicare or Medicaid certified nursing home—and to examine prescribing rates by length of stay, using steps identified by CMS based on dates reported in the nursing home assessments. In addition to excluding residents with dementia also diagnosed with schizophrenia and bipolar disorder, we excluded residents with Tourette syndrome, a condition for which FDA has approved the use of certain antipsychotics, as well as Huntington’s disease, a condition for which CMS guidance has recognized antipsychotics as an acceptable treatment. Individuals with both dementia and at least one of these diagnoses accounted for about 7 percent of nursing home residents with dementia overall. We also excluded residents with outlier identification codes or other outlier data, residents under the age of 65, and residents in facilities outside of the 50 states and the District of Columbia. We included only those residents who lived through 2012 so that there was a complete year of data for each resident and because antipsychotic drugs can be used in a hospice setting to make residents more comfortable at the end of their lives. For this analysis, we determined an individual was prescribed an antipsychotic drug if any nursing home assessment during 2012 indicated the resident took an antipsychotic drug during the previous 7 days, and we include any instance where antipsychotic use is documented. We disaggregated the data to examine certain characteristics, such as gender, age, and geographic location. To identify what Medicare Part D plans paid for antipsychotic drugs prescribed to older adults with dementia in 2012, we identified individuals with dementia using the Medicare Part D Risk File, and calculated plan payments for those enrollees using the PDE claims data.
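The MDS-based rule described above—a resident counts as prescribed if any 2012 assessment indicates antipsychotic use during the previous 7 days—can be sketched similarly. The assessment records and field layout below are invented stand-ins, not the actual MDS format.

```python
import datetime

# Toy assessment records: (resident_id, assessment_date,
# antipsychotic_taken_in_prior_7_days). One positive assessment
# anywhere in 2012 is enough to flag the resident.
assessments = [
    (1, datetime.date(2012, 1, 15), False),
    (1, datetime.date(2012, 7, 1), True),   # flags resident 1
    (2, datetime.date(2012, 3, 10), False),
    (2, datetime.date(2012, 9, 20), False),
    (3, datetime.date(2012, 5, 5), True),   # flags resident 3
]

residents = {rid for rid, _, _ in assessments}
prescribed_residents = {
    rid
    for rid, date, used in assessments
    if date.year == 2012 and used
}

print(sorted(prescribed_residents))
print(len(prescribed_residents), "of", len(residents), "residents flagged")
```

With these toy records, residents 1 and 3 are flagged even though resident 1 also has a negative assessment, matching the "any documented instance" rule.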
We also calculated plan payments for the most commonly prescribed antipsychotic drugs, and used the National Plan and Provider Enumeration System (NPPES) to identify the breakdown of prescriber specialties listed on antipsychotic drug claims under Medicare Part D in 2012 to calculate the share of plan payments for prescriptions from the specialties with the most antipsychotic prescribing for individuals with dementia. We ensured the reliability of the MDS data, Medicare PDE data, Medicare Part D Risk File data, MBSF data, Red Book data, and NPPES data used in this report by performing appropriate electronic data checks, reviewing relevant documentation, and interviewing officials and representatives knowledgeable about the data, where necessary. We found the data were sufficiently reliable for the purpose of our analyses. We conducted this performance audit from January 2014 through January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To identify what is known from published research about factors contributing to the prescribing of antipsychotic drugs to older adults with dementia, we conducted a literature search among recently published articles; specifically, we searched for relevant articles published from January 1, 2009, through March 31, 2014. We conducted a structured search of various databases for relevant peer reviewed and industry journals including MEDLINE, BIOSIS Previews, and ProQuest. 
Key terms included various combinations of “antipsychotic,” “dementia,” “elderly,” “older adults,” “nursing homes,” “community,” “assisted living,” “home health,” “medication management,” and “medication monitoring.” From all database sources, we identified 386 articles. We first reviewed the abstracts for each of these articles for relevancy in identifying contributing factors related to the use of antipsychotic drugs both inside and outside of nursing homes. For those articles we found relevant, we reviewed the full article and excluded those where the research (1) was conducted outside the United States; (2) included individuals less than 65 years of age; or (3) was an editorial submission. We added one article that could be linked to original research outside of the research cited in the article. After excluding these articles and including others, 42 articles remained: 22 focused on nursing homes; 11 focused on settings outside of nursing homes; 7 focused on both settings; and in 2 articles, the settings were either unclear or undetermined. Articles were then coded by analysts according to whether they identified contributing factors for use of antipsychotic drugs. We found 18 that contained detailed reasons that contribute to antipsychotic drug use among older adults: Bowblis, J. R., S. Crystal, O. Intrator, and J. A. Lucas. “Response to Regulatory Stringency: The Case of Antipsychotic Medication Use in Nursing Homes.” Health Economics, vol. 21 (2012). Briesacher, B. A., J. Tjia, T. Field, K. M. Mazor, J. L. Donovan, A. O. Kanaan, L. R. Harrold, C. A. Lemay, and J. H. Gurwitz. “Nationwide Variation in Nursing Home Antipsychotic Use, Staffing and Quality of Care.” Abstracts of the 28th ICPE 2012 (2012). Briesacher, B. A., J. Tjia, T. Field, D. Peterson, and J. H. Gurwitz. “Antipsychotic Use Among Nursing Home Residents.” The Journal of the American Medical Association, vol. 309, no. 5 (2013). Chen, Y., B. A. Briesacher, T. S. Field, J. Tjia, D. T. Lau, and J. H.
Gurwitz. “Unexplained Variation across U.S. Nursing Homes in Antipsychotic Prescribing Rates.” Archives of Internal Medicine, vol. 170, no. 1 (2010). Crystal, S., M. Olfson, C. Huang, H. Pincus, and T. Gerhard. “Broadened Use of Atypical Antipsychotics: Safety, Effectiveness, and Policy Challenges: Expanded Use of these Medications, Frequently Off-label, Has Often Outstripped the Evidence Base for the Diverse Range of Patients Who Are Treated with Them.” Health Affairs, vol. 28, no. 5 (2009). Department of Health and Human Services – Office of Inspector General, “Medicare Atypical Antipsychotic Drug Claims for Elderly Nursing Home Residents,” OEI-07-08-00150, May 2011. Fung, V., M. Price, A. B. Busch, M. B. Landrum, B. Fireman, A. Nierenberg, W. H. Dow, R. Hui, R. Frank, J. P. Newhouse, and J. Hsu. “Adverse Clinical Events among Medicare Beneficiaries Using Antipsychotic Drugs: Linking Health Insurance Benefits and Clinical Needs.” Medical Care, vol. 51, no. 7 (2013). Healthcare Management Solutions, LLC and the Meyers Primary Care Institute at the University of Massachusetts Medical School. Antipsychotic Drug Use Project Final Report (Columbia, Md.: January 2013). Kamble, P., J. Sherer, H. Chen, and R. Aparasu. “Off-Label Use of Second-Generation Antipsychotic Agents among Elderly Nursing Home Residents.” Psychiatric Services, vol. 61, no. 2 (2010). Kamble, P., H. Chen, J. T. Sherer, and R. R. Aparasu. “Use of Antipsychotics among Elderly Nursing Home Residents with Dementia in the United States: An Analysis of National Survey Data.” Drugs & Aging, vol. 26, no. 6 (2009). Lemay, C. A., K. M. Mazor, T. S. Field, J. Donovan, A. Kanaan, B. A. Briesacher, S. Foy, L. R. Harrold, J. H. Gurwitz, and J. Tjia. “Knowledge of and Perceived Need for Evidence-Based Education about Antipsychotic Medications among Nursing Home Leadership and Staff.” The Journal of the American Medical Directors Association, vol. 14, no. 12 (2013). Lucas, J. A., S. Chakravarty, J. R. Bowblis, T.
Gerhard, E. Kalay, E. K. Paek, and S. Crystal. “Antipsychotic Medication Use in Nursing Homes: A Proposed Measure of Quality.” International Journal of Geriatric Psychiatry (2014). Molinari, V. A., D. A. Chiriboga, L. G. Branch, J. Schinka, L. Shonfeld, L. Kos, W. L. Mills, J. Krok, and K. Hyer. “Reasons for Psychiatric Medication Prescription for New Nursing Home Residents.” Aging & Mental Health, vol. 15, no. 7 (2011). Rhee, Y., J. G. Cernansky, L. L. Emanuel, C. G. Chang, and J. W. Shega. “Psychotropic Medication Burden and Factors Associated with Antipsychotic Use: An Analysis of a Population-Based Sample of Community-Dwelling Older Persons with Dementia.” The Journal of the American Geriatrics Society, no. 59 (2011). Saad, M., M. Cassagnol, and E. Ahmed. “The Impact of FDA’s Warning on the Use of Antipsychotics in Clinical Practice: A Survey.” The Consultant Pharmacist, vol. 25, no. 11 (2010). Sapra, M., A. Varma, R. Sethi, I. Vahia, M. Chowdhury, K. Kim, and R. Herbertson. “Utilization of Antipsychotics in Ambulatory Elderly with Dementia in an Outpatient Setting.” Federal Practitioner (2012). Tjia, J., T. Field, C. Lemay, K. Mazor, M. Pandolfi, A. Spenard, S. Ho, A. Kanaan, J. Donovan, J. H. Gurwitz, and B. Briesacher. “Antipsychotic Use in Nursing Homes Varies By Psychiatric Consultant.” Medical Care, vol. 52, no. 3 (2014). Watson-Wolfe, K., E. Galik, J. Klinedinst, and N. Brandt. “Application of the Antipsychotic Use in Dementia Assessment Audit Tool to Facilitate Appropriate Antipsychotic Use in Long Term Care Residents with Dementia.” Geriatric Nursing, vol. 35 (2014). In addition to the contact named above, Lori Achman, Assistant Director; Todd D. Anderson; Shaunessye D. Curry; Leia Dickerson; Sandra George; Kate Nast Jones; Ashley Nurhussein-Patterson; and Laurie Pachter made key contributions to this report. | Dementia affects millions of older adults, causing behavioral symptoms such as mood changes, loss of communication, and agitation.
Concerns have been raised about the use of antipsychotic drugs to address the behavioral symptoms of the disease, primarily due to the FDA's boxed warning that these drugs may cause an increased risk of death when used by older adults with dementia, and because the drugs are not approved for this use. GAO was asked to examine psychotropic drug prescribing for older adult nursing home residents. In this report, GAO examined (1) to what extent antipsychotic drugs are prescribed for older adults with dementia living inside and outside nursing homes, (2) what is known from selected experts and published research about factors contributing to such prescribing, and (3) to what extent HHS has taken action to reduce the use of antipsychotic drugs by older adults with dementia. GAO analyzed multiple data sources including 2012 Medicare Part D drug event claims and nursing home assessment data; reviewed research and relevant federal guidance and regulations; and interviewed experts and HHS officials. Antipsychotic drugs are frequently prescribed to older adults with dementia. GAO's analysis found that about one-third of older adults with dementia who spent more than 100 days in a nursing home in 2012 were prescribed an antipsychotic, according to data from Medicare's prescription drug program, also known as Medicare Part D. Among Medicare Part D enrollees with dementia living outside of a nursing home that same year, about 14 percent were prescribed an antipsychotic. (See figure.) Experts and research identified patient agitation or delusions, as well as certain setting-specific characteristics, as factors contributing to the prescribing of antipsychotics to older adults. For example, experts GAO spoke with noted that antipsychotic drugs are often initiated in hospital settings and carried over when older adults are admitted to a nursing home.
In addition, experts and research have reported that nursing home staff levels, particularly low staff levels, lead to higher antipsychotic drug use. Agencies within the Department of Health and Human Services (HHS) have taken several actions to address antipsychotic drug use by older adults in nursing homes, as described in HHS's National Alzheimer's Plan; however, none have been directed to settings outside of nursing homes, such as assisted living facilities or individuals' homes. While the National Alzheimer's Plan has a goal to improve dementia care for all individuals regardless of residence, HHS officials said that efforts to reduce antipsychotic use have not focused on care settings outside nursing homes, though HHS has done work to support family caregivers in general. Stakeholders GAO spoke to indicated that educational efforts similar to those provided for nursing homes should be extended to other settings. Extending educational efforts to caregivers and providers outside of the nursing home could help lower the use of antipsychotics among older adults with dementia living both inside and outside of nursing homes. GAO recommends that HHS expand its outreach and educational efforts aimed at reducing antipsychotic drug use among older adults with dementia to include those residing outside of nursing homes by updating the National Alzheimer's Plan. HHS concurred with this recommendation. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOE is responsible for a diverse set of missions, including nuclear security, energy research, and environmental cleanup. These missions are managed by various organizations within DOE and largely carried out by management and operating (M&O) contractors at DOE sites. According to federal budget data, NNSA is one of the largest organizations in DOE, overseeing nuclear weapons and nonproliferation-related missions at its sites. With a $10.5 billion budget in fiscal year 2011—nearly 40 percent of DOE’s total budget—NNSA is responsible for providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintaining core competencies in nuclear weapons science, technology, and engineering. Under DOE’s long-standing model of having unique M&O contractors at each site, management of its sites has historically been decentralized and, thus, fragmented. Since the Manhattan Project produced the first atomic bomb during World War II, NNSA, DOE, and predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and efficiently operate the facilities necessary for the nation’s nuclear defense. DOE’s relationship with these entities has been formalized over the years through its M&O contracts—agreements that give DOE’s contractors unique responsibility to carry out major portions of DOE’s missions and apply their scientific, technical, and management expertise. Currently, DOE spends 90 percent of its annual budget on M&O contracts, making it the largest non-Department of Defense contracting agency in the government. The contractors at DOE’s NNSA sites have operated under DOE’s direction and oversight but largely independently of one another.
Various headquarters and field-based organizations within DOE and NNSA develop policies, and NNSA site offices, collocated with NNSA’s sites, conduct day-to-day oversight of the M&O contractors and evaluate the contractors’ performance in carrying out the sites’ missions. As we have reported since 1999, NNSA has not had reliable enterprise-wide budget and cost data, which potentially increases risk to NNSA’s programs. Specifically: In July 2003 and January 2007, we reported that NNSA lacked a planning and budgeting process that adequately validated contractor-prepared cost estimates used in developing annual budget requests. Establishing this process was required by the statute that created NNSA—Title 32 of the National Defense Authorization Act for Fiscal Year 2000. In particular, NNSA had not established an independent analysis unit to review program budget proposals, confirm cost estimates, and analyze budget alternatives. At the request of the Subcommittee on Energy and Water Development, Senate Committee on Appropriations, we are currently reviewing NNSA’s planning and budgeting process, the extent to which NNSA has established criteria for evaluating resource trade-offs, and challenges NNSA has faced in validating its budget submissions. We expect to issue a report on this work later this year. In June 2010, we reported that NNSA could not identify the total costs to operate and maintain essential weapons activities’ facilities and infrastructure. Furthermore, we found that contractor-reported costs to execute the scope of work associated with operating and maintaining these facilities and infrastructure likely significantly exceeded the budget for this program that NNSA justified to Congress.
We reported in February 2011 that NNSA lacked complete data on (1) the condition and value of its existing infrastructure, (2) cost estimates and completion dates for planned capital improvement projects, (3) shared-use facilities within the nuclear security enterprise, and (4) critical human capital skills in its M&O contractor workforce that are needed to maintain the Stockpile Stewardship Program. As a result, NNSA does not have a sound basis for making decisions on how to most effectively manage its portfolio of projects and other programs and will lack information that could help justify future budget requests or target cost savings opportunities, particularly given uncertainty over future federal budgets. We also found that it was difficult to compare or quantify total savings across sites because guidance for estimating savings is unclear and the methods used to estimate savings vary between sites. The administration plans to request $88 billion from Congress over the next decade to modernize the nuclear security enterprise and ensure that base scientific, technical, and engineering capabilities are sufficiently supported and the nuclear deterrent can continue to be safe, secure, and reliable. To adequately justify future presidential budget requests, NNSA must accurately identify these base capabilities and determine their costs. Without this information, NNSA risks being unable to identify return on its investment or opportunities for cost savings or to make fully informed decisions on trade-offs in a resource-constrained environment. NNSA, recognizing that its ability to make informed enterprise-wide decisions is hampered by the lack of comprehensive data and analytical tools, is considering the use of computer models—quantitative tools that couple data from each site with the functions of the enterprise—to integrate and analyze data to create an interconnected view of the enterprise, which may help to address some of the critical shortcomings we identified.
In July 2009, NNSA tasked its M&O contractors to form an enterprise modeling consortium. NNSA stated that the consortium is responsible for leading efforts to acquire and maintain enterprise data, enhance stakeholder confidence, integrate modeling capabilities, and fill in any gaps that are identified. The consortium has identified areas in which enterprise modeling projects could provide NNSA with reliable data and modeling capabilities, including capabilities on infrastructure and critical skills needs. In addition, we recently observed progress on NNSA’s development of an Enterprise Program Analysis Tool that should give NNSA greater insight into its sites’ cost reporting. The Tool also includes a mechanism to identify when resource trade-off decisions must be made, for example, when contractor-developed estimates for program requirements exceed the budget targets provided by NNSA for those programs. A tool such as this one could help NNSA obtain the basic data it needs to make informed management decisions, determine return on investment, and identify opportunities for cost saving. A basic tenet of effective management is the ability to complete projects on time and within budget. However, for more than a decade and in numerous reports, we have found that NNSA has continued to experience significant cost and schedule overruns on its major projects, principally because of ineffective oversight and poor contractor management. Specifically: In August 2000, we found that poor management and oversight of the National Ignition Facility construction project at Lawrence Livermore National Laboratory had increased the facility’s cost by $1 billion and delayed its scheduled completion date by 6 years. Among the many causes for the cost overruns or schedule delays, DOE and Livermore officials responsible for managing or overseeing the facility’s construction did not plan for the technically complex assembly and installation of the facility’s 192 laser beams. 
They also did not use independent review committees effectively to help identify and correct issues before they turned into costly problems. Similarly, in April 2010, we reported that weak management by DOE and NNSA had allowed the cost, schedule, and scope of ignition-related activities at the National Ignition Facility to increase substantially. Since 2005, ignition-related costs have increased by around 25 percent—from $1.6 billion to over $2 billion—and the planned completion date for these activities has slipped from the end of fiscal year 2011 to the end of fiscal year 2012 or beyond. We have issued several reports on the technical issues, cost increases, and schedule delays associated with NNSA’s efforts to extend, through refurbishment, the operational lives of nuclear weapons in the stockpile. For example, in December 2000, we reported that refurbishment of the W87 strategic warhead had experienced significant design and production problems that increased its refurbishment costs by over $300 million and caused schedule delays of about 2 years. Similarly, in March 2009, we reported that NNSA and the Department of Defense had not effectively managed cost, schedule, and technical risks for the B61 nuclear bomb and the W76 nuclear warhead refurbishments. For the B61 life extension program, NNSA was only able to stay on schedule by significantly reducing the number of weapons undergoing refurbishment and abandoning some refurbishment objectives. In the case of the W76 nuclear warhead, NNSA experienced a 1-year delay and an unexpected cost increase of nearly $70 million as a result of its ineffective management of one of the highest risks of the program—the manufacture of a key material known as Fogbank, which NNSA did not have the knowledge, expertise, or facilities to manufacture. In October 2009, we reported on shortcomings in NNSA’s oversight of the planned relocation of its Kansas City Plant to a new, more modern facility.
Rather than construct a new facility itself, NNSA chose to have a private developer build it. NNSA would then lease the building through the General Services Administration for a period of 20 years. However, when choosing to lease rather than construct a new facility itself, NNSA allowed the Kansas City Plant to limit its cost analysis to a 20-year life cycle that has no relationship with known requirements of the nuclear weapons stockpile or the useful life of a production facility that is properly maintained. As a result, NNSA’s financing decisions were not as fully informed and transparent as they could have been. If the Kansas City Plant had quantified potential cost savings to be realized over the longer useful life of the facility, NNSA may have made a different decision as to whether to lease or construct a new facility itself. We reported in March 2010 that NNSA’s plutonium disposition program was behind schedule in establishing a capability to produce the plutonium feedstock necessary to operate its Mixed-oxide Fuel Fabrication facility currently being constructed at DOE’s Savannah River Site in South Carolina. In addition, NNSA had not sufficiently assessed alternatives to producing plutonium feedstock and had only identified one potential customer for the mixed-oxide fuel the facility would produce. In its fiscal year 2012 budget justification to Congress, NNSA reported that it did not have a construction cost baseline for the facility needed to produce the plutonium feedstock for the mixed-oxide fuel, although Congress had already appropriated over $270 million through fiscal year 2009 and additional appropriation requests totaling almost $2 billion were planned through fiscal year 2016. NNSA stated in its budget justification that it is currently considering options for producing necessary plutonium feedstock without constructing a new facility. 
GAO, Nuclear Weapons: National Nuclear Security Administration’s Plans for Its Uranium Processing Facility Should Better Reflect Funding Estimates and Technology Readiness, GAO-11-103 (Washington, D.C.: Nov. 19, 2010). We are also conducting related work for the Senate Committee on Appropriations and plan to issue our report next month. As discussed above, NNSA remains on our high-risk list and remains vulnerable to fraud, waste, abuse, and mismanagement. DOE has recently taken a number of actions to improve management of major projects, including those overseen by NNSA. For example, DOE has updated program and project management policies and guidance in an effort to improve the reliability of project cost estimates, better assess project risks, and better ensure project reviews that are timely, useful, and identify problems early. However, DOE needs to ensure that NNSA has the capacity—that is, the people and other resources—to resolve its project management difficulties and that it has a program to monitor and independently validate the effectiveness and sustainability of its corrective measures. This is particularly important as NNSA embarks on its long-term, multibillion-dollar effort to modernize the nuclear security enterprise. Another underlying reason for the creation of NNSA was a series of security issues at the national laboratories. Work carried out at NNSA’s sites may involve plutonium and highly enriched uranium, which are extremely hazardous. For example, exposure to small quantities of plutonium is dangerous to human health, so that even inhaling a few micrograms creates a long-term risk of lung, liver, and bone cancer and inhaling larger doses can cause immediate lung injuries and death. Also, if not safely contained and managed, plutonium can be unstable and spontaneously ignite under certain conditions.
NNSA’s sites also conduct a wide range of other activities, including construction and routine maintenance and operation of equipment and facilities that also run the risk of accidents, such as those involving heavy machinery or electrical mishaps. The consequences of such accidents could be less severe than those involving nuclear materials, but they could also lead to long-term illnesses, injuries, or even deaths among workers or the public. Plutonium and highly enriched uranium must also be stored under extremely high security to protect them from theft or terrorist attack. In numerous reports, we have expressed concerns about NNSA’s oversight of safety and security across the nuclear security enterprise. With regard to nuclear and worker safety: In October 2007, we reported that there had been nearly 60 serious accidents or near misses at NNSA’s national laboratories since 2000. These incidents included worker exposure to radiation, inhalation of toxic vapors, and electrical shocks. Although no one was killed, many of the accidents caused serious harm to workers or damage to facilities. For example, at Los Alamos in July 2004, an undergraduate student who was not wearing required eye protection was partially blinded in a laser accident. Accidents and nuclear safety violations also contributed to the temporary shutdown of facilities at both Los Alamos and Livermore in 2004 and 2005. In the case of Los Alamos, laboratory employees disregarded established procedures and then attempted to cover up the incident, according to Los Alamos officials. Our review of nearly 100 reports issued since 2000 found that the contributing factors to these safety problems generally fell into three key categories: (1) relatively lax laboratory attitudes toward safety procedures; (2) laboratory inadequacies in identifying and addressing safety problems with appropriate corrective actions; and (3) inadequate oversight by NNSA.
We reported in January 2008 on a number of long-standing nuclear and worker safety concerns at Los Alamos. These concerns included, among other things, the laboratory’s lack of compliance with safety documentation requirements, inadequate safety systems, radiological exposures, and enforcement actions for significant violations of nuclear safety requirements that resulted in civil penalties totaling nearly $2.5 million. In October 2008, we reported that DOE’s Office of Health, Safety, and Security—which, among other things, develops, oversees, and helps enforce nuclear safety policies at DOE and NNSA sites—fell short of fully meeting our elements of effective independent oversight of nuclear safety. For example, the office’s ability to act independently was limited because it had no role in reviewing technical analyses that help ensure safe design and operation of nuclear facilities, and the office had no personnel at DOE sites to provide independent safety observations. With regard to security: In June 2008, we reported that significant security problems at Los Alamos had received insufficient attention. The laboratory had over two dozen initiatives under way that were principally aimed at reducing, consolidating, and better protecting classified resources but had not implemented complete security solutions to address either classified parts storage in unapproved storage containers or weaknesses in its process for ensuring that actions taken to correct security deficiencies were completed. Furthermore, Los Alamos had implemented initiatives that addressed a number of previously identified security concerns but had not developed the long-term strategic framework necessary to ensure that its fixes would be sustained over time. Similarly, in October 2009, we reported that Los Alamos had implemented measures to enhance its information security controls, but significant weaknesses remained in protecting the information stored on and transmitted over its classified computer network.
A key reason for this was that the laboratory had not fully implemented an information security program to ensure that controls were effectively established and maintained. In March 2009, we reported about numerous and wide-ranging security deficiencies at Livermore, particularly in the ability of Livermore’s protective force to assure the protection of special nuclear material and the laboratory’s protection and control of classified matter. Livermore’s physical security systems, such as alarms and sensors, and its security program planning and assurance activities were also identified as areas needing improvement. Weaknesses in Livermore’s contractor self-assessment program and the NNSA Livermore Site Office’s oversight of the contractor contributed to these security deficiencies at the laboratory. According to one DOE official, both programs were “broken” and missed even the “low-hanging fruit.” The laboratory took corrective action to address these deficiencies, but we noted that better oversight was needed to ensure that security improvements were fully implemented and sustained. We reported in December 2010 that NNSA needed to improve its contingency planning for its classified supercomputing operations. All three NNSA laboratories had implemented some components of a contingency planning and disaster recovery program, but NNSA had not provided effective oversight to ensure that the laboratories’ contingency and disaster recovery planning and testing were comprehensive and effective. In particular, NNSA’s component organizations, including the Office of the Chief Information Officer, were unclear about their roles and responsibilities for providing oversight in the laboratories’ implementation of contingency and disaster recovery planning. 
In March 2010, the Deputy Secretary of Energy announced a new effort— the 2010 Safety and Security Reform effort—to revise DOE’s safety and security directives and reform its oversight approach to “provide contractors with the flexibility to tailor and implement safety and security programs without excessive federal oversight or overly prescriptive departmental requirements.” We are currently reviewing the reform of DOE’s safety directives and the benefits DOE hopes to achieve from this effort for, among others, the House Committee on Energy and Commerce. We expect to issue our report next month. Nevertheless, our prior work has shown that ineffective NNSA oversight of its contractors has contributed to many of the safety and security problems across the nuclear security enterprise and that NNSA faces challenges in sustaining improvements to safety and security performance. NNSA faces a complex task in planning, budgeting, and ensuring the execution of interconnected activities across the nuclear security enterprise. Among other things, maintaining government-owned facilities that were constructed more than 50 years ago and ensuring M&O contractors are sustaining critical human capital skills that are highly technical in nature and limited in supply are difficult undertakings. Over the past decade, we have made numerous recommendations to DOE and NNSA to improve their management and oversight practices. DOE and NNSA have acted on many of these recommendations, and we will continue to monitor progress being made in these areas. In the current era of tight budgets, Congress and the American taxpayer have the right to know whether investments made in the nuclear security enterprise are worth the cost. However, NNSA currently lacks the basic financial information on the total costs to operate and maintain its essential facilities and infrastructure, leaving it unable to identify return on investment or opportunities for cost savings. 
NNSA is now proposing to spend decades and tens of billions of dollars to modernize the nuclear security enterprise, largely by replacing or refurbishing aging and decaying facilities at its sites across the United States. Given NNSA’s record of weak management of its major projects, we believe that careful federal oversight will be critical to ensure this time and money are spent as effectively and efficiently as possible. With regard to the concerns that DOE’s and NNSA’s oversight of the laboratories’ activities has been excessive and that safety and security requirements are overly prescriptive and burdensome, we agree that excessive oversight and micromanagement of contractors’ activities is not an efficient use of scarce federal resources. Nevertheless, in our view, the problems we continue to identify in the nuclear security enterprise are not caused by excessive oversight, but instead result from ineffective oversight. Given the critical nature of the work the nuclear security enterprise performs and the high-hazard operations it conducts—often involving extremely hazardous materials, such as plutonium and highly enriched uranium, that must be stored under high security to protect them from theft—careful oversight and stringent safety and security requirements will always be required at these sites. It is also important in an era of scarce resources that DOE and NNSA ensure that the work conducted by the nuclear security enterprise is primarily focused on its principal mission—ensuring the safety and reliability of the nuclear weapons stockpile. DOE has other national laboratories capable of conducting valuable scientific research on issues as wide-ranging as climate change or high-energy physics, but there is no substitute for the sophisticated capabilities and highly-skilled human capital present in the nuclear security enterprise for ensuring the credibility of the U.S. nuclear deterrent.
Chairman Turner, Ranking Member Sanchez, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Allison Bawden, Ryan T. Coles, and Jonathan Gill, Assistant Directors, and Patrick Bernard, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy (DOE), is responsible for managing its contractors' nuclear weapon- and nonproliferation-related national security activities in laboratories and other facilities, collectively known as the nuclear security enterprise. GAO designated DOE's management of its contracts as an area at high risk of fraud, waste, and abuse. Progress has been made, but GAO continues to identify problems across the nuclear security enterprise, from projects' cost and schedule overruns to inadequate oversight of safety and security at NNSA's sites. Laboratory and other officials have raised concerns that federal oversight of the laboratories' activities has been excessive. With NNSA proposing to spend tens of billions of dollars to modernize the nuclear security enterprise, it is important to ensure scarce resources are spent in an effective and efficient manner.
This testimony addresses (1) NNSA's ability to produce budget and cost data necessary to make informed management decisions, (2) improving NNSA's project and contract management, and (3) DOE's and NNSA's safety and security oversight. It is based on prior GAO reports issued from August 2000 to January 2012. DOE and NNSA continue to act on the numerous recommendations GAO has made to improve budget and cost data, project and contract management, and safety and security oversight. GAO will continue to monitor DOE's and NNSA's implementation of these recommendations. NNSA has successfully ensured that the nuclear weapons stockpile remains safe and reliable in the absence of underground nuclear testing, accomplishing this complicated task by using state-of-the-art facilities as well as the skills of top scientists. Nevertheless, NNSA does not have reliable enterprise-wide management information on program budgets and costs, which potentially increases risk to NNSA's programs. For example, in June 2010, GAO reported that NNSA could not identify the total costs to operate and maintain essential weapons activities facilities and infrastructure. In addition, in February 2011, GAO reported that NNSA lacks complete data on, among other things, the condition and value of its existing infrastructure, cost estimates and completion dates for planned capital improvement projects, and critical human capital skills in its contractor workforce that are needed for its programs. As a result, NNSA does not have a sound basis for making decisions on how to most effectively manage its portfolio of projects and other programs and lacks information that could help justify future budget requests or target cost savings opportunities. NNSA recognizes that its ability to make informed decisions is hampered and is taking steps to improve its budget and cost data.
For more than a decade and in numerous reports, GAO found that NNSA has continued to experience significant cost and schedule overruns on its major projects. For example, in 2000 and 2009, respectively, GAO reported that NNSA's efforts to extend the operational lives of nuclear weapons in the stockpile have experienced cost increases and schedule delays, such as a $300 million cost increase and 2-year delay in the refurbishment of one warhead and a nearly $70 million increase and 1-year delay in the refurbishment of another warhead. NNSA's construction projects have also experienced cost overruns. For example, GAO reported that the cost to construct a modern Uranium Processing Facility at NNSA's Y-12 National Security Complex experienced a nearly seven-fold cost increase from between $600 million and $1.1 billion in 2004 to between $4.2 billion and $6.5 billion in 2011. Given NNSA's record of weak management of major projects, GAO believes careful federal oversight of NNSA's modernization of the nuclear security enterprise will be critical to ensure that resources are spent as effectively and efficiently as possible. NNSA's oversight of safety and security in the nuclear security enterprise has also been questioned. As work carried out at NNSA's sites involves dangerous nuclear materials such as plutonium and highly enriched uranium, stringent safety procedures and security requirements must be observed. GAO reported in 2008 on numerous safety and security problems across NNSA's sites, contributing, among other things, to the temporary shutdown of facilities at both Los Alamos and Lawrence Livermore National Laboratories in 2004 and 2005, respectively. Ineffective NNSA oversight of its contractors' activities contributed to many of these incidents, as did relatively lax laboratory attitudes toward safety procedures.
In many cases, NNSA has made improvements to resolve these safety and security concerns, but better oversight is needed to ensure that improvements are fully implemented and sustained. GAO agrees that excessive oversight and micromanagement of contractors' activities are not an efficient use of scarce federal resources, but believes that NNSA's problems are caused not by excessive oversight but by ineffective departmental oversight.
In the absence of international cash donation management policies, procedures, and plans, DOS developed an ad hoc process to manage the cash donations flowing to the U.S. government from other countries for Hurricane Katrina relief efforts. By September 21, about $115 million had been received and as of December 31, 2005, DOS reported that $126 million had been donated by 36 countries. Our review noted that DOS’s ad hoc procedures did ensure the proper recording of international cash donations and we were able to reconcile the funds received with those held in the designated DOS account at Treasury. Also, an NSC-led interagency working group was established to determine uses for the international cash donations for domestic disaster relief. In October 2005, $66 million of the $126 million donated had been accepted by FEMA under the Stafford Act and used for a Hurricane Katrina relief grant. As of March 16, 2006, the other $60 million from international donations remained undistributed. Once accepted by FEMA under the Stafford Act, funds would be limited to use on activities in furtherance of the act. We were told that the NSC-led interagency working group did not transfer the funds to FEMA because it wanted to retain the flexibility to spend the donated funds on a wider range of assistance than is permitted under the Stafford Act. During this period and while deliberations were ongoing, the funds were kept in an account that did not pay interest, thereby diminishing the purchasing power of the donated funds and losing an opportunity to maximize the resources available for relief. Under the Stafford Act, FEMA could have held the funds in an account that can pay interest, but Treasury lacks the statutory authority to credit DOS-held funds with interest. A number of options could be considered to address this situation if there are dual goals of flexibility and maintaining purchasing power. 
Table 1 below shows the dates of key events in the receipt and distribution of the international cash donations according to documentation received and interviews with DOS and FEMA officials. In early September 2005, FEMA officials identified an account at the U.S. Treasury for recording international cash donations and a number of potential uses for the donations that would help meet relief needs of the disaster. By September 21, 2005, about $115 million in foreign cash donations had been received. In a paper submitted to the NSC-led interagency working group, dated September 22, 2005, DOS recognized that every effort should be made to disburse the funds to provide swift and meaningful relief to Hurricane Katrina victims without compromising needed internal controls to ensure proper management and effective use of the cash donations and transparency. FEMA officials told us that on September 23, 2005, they had identified and proposed to the NSC-led interagency working group that the international cash donations could be spent on the following items for individuals and families affected by Hurricane Katrina: social services assistance, medical transportation, adapting homes for medical and handicap needs, job training and education, living expenses, building materials, furniture, and transportation. At NSC’s request, on October 7, 2005 FEMA presented more detailed proposals for using the foreign donations. On October 20, 2005, with the NSC-led interagency working group consensus, DOS transferred to FEMA $66 million of the international donations to finance case management services to help up to 100,000 households affected by Hurricane Katrina define what their needs are and to obtain available assistance. As of February 2006, the remaining $60 million had not been released, pending the NSC-led interagency working group determination about the acceptance and use of the remaining funds. 
DOS and FEMA officials told us that for the remaining $60 million in donated funds, the NSC-led interagency working group had considered a series of proposals received from a number of both public and private entities. At the time of our review, we were told that the NSC-led interagency working group decided that the vital needs of schools in the Gulf Coast area would be an appropriate place to apply the donations, and that they were working with the Department of Education to finalize arrangements to provide funding to meet those needs. FEMA officials told us that under the Stafford Act, they could use donated funds for projects such as rebuilding schools, but projects for new school buildings are not consistent with Stafford Act purposes unless replacing a damaged one. Also, according to DHS officials, the Act would have required that receiving entities match FEMA funds for these purposes. However, because of the devastation, the entities would have difficulty matching FEMA funds, which in essence precluded FEMA from undertaking these types of projects. According to DHS, FEMA considered whether it would be useful for donated funds to contribute to the non-federal share for applicants having trouble meeting the non-federal share, but would need legislative authority to use it to match federal funds. We contacted NSC to further discuss these matters; however NSC did not respond to our requests for a meeting. On March 16, 2006, DOS and the Department of Education signed a Memorandum of Agreement regarding the use of $60 million of the international cash donations. Advance planning is very important given the outstanding pledges of $400 million or more that DOS officials indicated may still be received. While acknowledging that the U.S.
government has never previously had occasion to accept such large amounts of international donations for disaster relief, going forward, advance planning is a useful tool to identify potential programs and projects prior to the occurrence of an event of such magnitude. In the case of Hurricane Katrina, while the NSC-led interagency working group reviewed various proposals on the use of the remaining $60 million, DOS held the funds in an account at the U.S. Treasury that did not earn interest. Treasury lacks the statutory authority to credit those DOS-held funds with interest. For the time the funds were not used, their purchasing power diminished due to inflation. If these funds had been placed in an account that could have been credited with interest to offset the erosion of purchasing power, the amount of funds available for relief and recovery efforts would have increased while decision makers determined how to use them. The U.S. government would be responsible for paying the interest if these funds were held in an account at the Treasury that can pay interest. Although the Stafford Act does not apply to the donated funds maintained in the DOS account at Treasury, the Stafford Act does provide that excess funds accepted under the Act may be placed in Treasury securities, and the related interest paid on such investments would be credited to the account. Had the foreign monetary donations been placed in Treasury securities, we estimate that by February 23, 2006, the remaining funds for relief efforts would have increased by nearly $1 million. The Administration’s report, The Federal Response To Hurricane Katrina: Lessons Learned, released on February 23, 2006, recognized that there was no pre-established plan for handling international donations and that implementation of the procedures developed was a slow and often frustrating process. 
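The nearly $1 million estimate above can be checked with simple-interest arithmetic. The sketch below is illustrative only: the 4 percent rate and the holding period (late September 2005, when most donations had arrived, through February 23, 2006) are assumptions for this example, not figures stated in the testimony.

```python
from datetime import date

def forgone_interest(principal, annual_rate, start, end):
    """Simple-interest estimate of earnings forgone while funds sat
    in an account that could not be credited with interest."""
    days = (end - start).days
    return principal * annual_rate * days / 365

# Assumed inputs: $60 million undistributed, a ~4% short-term Treasury
# rate, held from September 22, 2005 through February 23, 2006.
lost = forgone_interest(60_000_000, 0.04, date(2005, 9, 22), date(2006, 2, 23))
print(f"${lost:,.0f}")
```

Under these assumed inputs the result is on the order of $1 million, consistent with the estimate of forgone interest described above.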
The report recommends, among other things, that DOS establish before June 1, 2006, an interagency process to determine appropriate uses of international cash donations and ensure timely use of these funds in a transparent and accountable manner. DOS officials recognized that the ad hoc process needed to be formalized and planned to develop such procedures by June 1, 2006. When developing policies and procedures, it is important that consideration also be given to strategies that can help maintain the purchasing power of the international donations. If the goal is to maintain both purchasing power and flexibility, then among the options to consider are seeking statutory authority for DOS to record funds in a Treasury account that can pay interest, similar to donations accepted under the Stafford Act, pending decisions on how the funds would be used, or allowing DOS to deposit the funds in an existing Treasury account of another agency that can pay interest pending such decisions. In the absence of guidance, we found a lack of accountability in the management of the in-kind assistance. Specifically, FEMA did not have a process in place that confirmed that the in-kind assistance sent to distribution sites was received. The lack of guidance, inadequate information about the nature and content of foreign offers of in-kind assistance, and insufficient advance coordination also resulted in the arrival of food and medical assistance that could not be used in the United States. Also, the ad hoc procedures created to manage foreign military donations allowed for confusion about which agency—FEMA or DOD— should accept and be responsible for oversight of such donations. Because of the lack of guidance to track assistance, USAID/OFDA created a database to track the assistance as it arrived.
We found that USAID/OFDA reasonably accounted for the assistance given the lack of information on the manifests and the amount of assistance that was arriving within a short time. On September 14, 2005, FEMA requested that USAID/OFDA track the assistance from receipt to final disposition. However, the system USAID/OFDA created did not include confirming that the assistance was received at the FEMA distribution sites. USAID/OFDA did not set up these procedures on its own, in part because its mission is to deliver assistance in foreign countries and it had never distributed assistance within the United States. FEMA officials told us that they assumed USAID/OFDA had these controls in place. FEMA and USAID/OFDA officials could not provide us with evidence that confirmed that the assistance sent to distribution sites was received. Without these controls in place to ensure accountability for the assistance, FEMA does not know if all or part of these donations were received at FEMA distribution sites. Internal controls, such as a system to track that shipments are received at intended destinations, provide an agency with oversight, and for FEMA in this case, they help ensure that international donations are received at FEMA destination sites. We noted that the guidance the agencies created did not include policies and procedures to help ensure that food and medical supplies that the U.S. government agreed to receive and came into the United States met U.S. standards. The lack of guidance, inadequate information up-front about the nature and content of foreign offers of in-kind assistance, and insufficient advance coordination with regulatory agencies before agreeing to receive them, resulted in food and medical items, such as MREs and medical supplies, that came into the United States even though they did not meet USDA or FDA standards and thus could not be distributed in the United States.
We noted that FEMA’s list of items that could be used for disaster relief, which was provided to DOS, was very general and did not provide any exceptions, for example, about the contents of MREs. DHS commented on our report that FEMA repeatedly requested from DOS additional information about the foreign items being offered, but DOS did not respond. Both instances represent lost opportunities to prevent the arrival of items that could not be distributed in the United States. The food items included MREs from five countries. Because of the magnitude of the disaster, some normal operating procedures governing the import of goods were waived. According to USDA and FDA officials, under normal procedures, entry documents containing specific information, which are filed with U.S. Customs and Border Protection, are transmitted to USDA and FDA for those agencies’ use in determining if the commodities are appropriately admissible into the United States. Without consultation or prior notification to USDA or FDA, the Commissioner of U.S. Customs and Border Protection authorized suspension of some normal operating procedures for the import of regulated items like food and medical supplies. Consequently, USDA and FDA had no involvement in the decision making or process of agreeing to receive regulated product donations, including MREs and medical supplies, and no opportunity to ensure that they would all be acceptable for distribution before the donated goods arrived. Both USDA and FDA, based on regulations intended to protect public health, prevented distribution of some international donations, which resulted in the assistance being stored at a cost of about $80,000. In the absence of policies and procedures, DOS, FEMA, and DOD created ad hoc policies and procedures to manage the receipt and distribution of foreign military goods and services.
However, this guidance left open which agency—FEMA or DOD—was to formally accept the foreign military assistance, and therefore each agency apparently assumed the other had done so under its own gift authority. As a result, it is unclear whether FEMA or DOD accepted or maintained oversight of the foreign military donations that were vetted through the DOS Task Force. The offers of foreign military assistance included, for example, the use of amphibious ships and diver salvage teams. FEMA did not maintain oversight of the foreign military donations that it accepted through the DOS task force. A FEMA official told us that FEMA was unable to tell us how the foreign military donations were used because it could not match the use of the donations with mission assignments it gave Northern Command. Moreover, FEMA and Northern Command officials told us of instances in which foreign military donations arrived in the United States that were not vetted through the DOS task force. For example, we were told of military MREs that were shipped to a military base and distributed directly to hurricane victims. For the shipments that were not vetted through the Task Force, DOS, FEMA, and DOD officials could not provide us information on the type, amount, or use of the items. As a result, the agencies cannot determine if these items of assistance were safeguarded and used as intended. In closing, since the U.S. government had never before received such substantial amounts of international disaster assistance, we recognize that DOS, FEMA, USAID/OFDA, and DOD created ad hoc procedures to manage the receipt, acceptance, and distribution of the assistance as best they could. Going forward, it will be important to have in place clear policies, procedures, and plans on managing and using both cash and in- kind donations in a manner that provides accountability and transparency.
If properly implemented, the six recommendations included in our report issued today will help to ensure that the cognizant agencies fulfill their responsibilities to effectively manage and maintain appropriate and adequate internal control over foreign donations. Mr. Chairman, this concludes GAO’s prepared statement. We would be happy to respond to any questions that you or Members of the Committee may have. For further information on this testimony, please contact either Davi M. D’Agostino at (202) 512-5431 or [email protected] or McCoy Williams at (202) 512-9095 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this testimony included Kay Daly, Lorelei St. James, Jay Spaan, Pamela Valentine, and Leonard Zapata. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In response to Hurricane Katrina, countries and organizations donated to the United States government cash and in-kind donations, including foreign military assistance. The National Response Plan establishes that the Department of State (DOS) is the coordinator of all offers of international assistance. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security (DHS) is responsible for accepting the assistance and coordinating its distribution. 
GAO's testimony covers (1) the amount and use of internationally donated cash and (2) the extent to which federal agencies with responsibilities for international in-kind assistance offered to the United States had policies and procedures to ensure the appropriate accountability for the acceptance and distribution of that assistance. Because the U.S. government had not received such substantial amounts of international disaster assistance before, ad hoc procedures were developed to accept, receive, and distribute the cash and in-kind assistance. Understandably, not all procedures would be in place at the outset to provide a higher level of accountability. The Administration recognized the need for improvement in its recent report on lessons learned from Hurricane Katrina. GAO was able to trace the cash donations either to designated U.S. Treasury accounts or to their disbursement. In the absence of policies, procedures, and plans, DOS developed an ad hoc process to manage $126 million in foreign cash donations to the U.S. government for Hurricane Katrina relief efforts. As cash donations arrived, a National Security Council (NSC)-led interagency working group was convened to make policy decisions about the use of the funds. FEMA officials told GAO they had identified and presented to the working group a number of items that the donated funds could be spent on. The NSC-led interagency working group determined that use of those donated funds, once accepted by FEMA under the Stafford Act, would be more limited than the wider range of possible uses available if the funds were held and then accepted under the gift authorities of other agencies. In October 2005, $66 million of the donated funds were spent on a FEMA case management grant, and as of March 16, 2006, $60 million remained undistributed in the DOS-designated account at the Treasury that did not pay interest. Treasury may pay interest on funds accepted by FEMA under the Stafford Act.
According to DOS, an additional $400 million in international cash donations is likely to arrive. It is important that cash management policies and spending plan options are considered and in place to deal with the forthcoming donations so that the purchasing power of the donated cash is maintained for relief and reconstruction. FEMA and other agencies did not have policies and procedures in place to ensure the proper acceptance and distribution of in-kind assistance donated by foreign countries and militaries. In-kind donations included food and clothing. FEMA and other agencies established ad hoc procedures. However, in the distribution of the assistance to FEMA sites, GAO found that no agency tracked and confirmed that assistance shipments arrived at their destinations. Also, lack of procedures, inadequate information up front about the donations, and insufficient coordination resulted in the U.S. government agreeing to receive food and medical items that were unsuitable for use in the United States and storage costs of about $80,000. The procedures also left room for confusion about which agency was to accept and provide oversight of foreign military donations. DOD's lack of internal guidance regarding the DOS coordinating process resulted in some foreign military donations that arrived without DOS, FEMA, or DOD oversight.
Surveillance of foodborne diseases allows public health officials to recognize trends, detect outbreaks, pinpoint the causes of these outbreaks, and develop effective prevention and control measures. Such surveillance presents a complex challenge. Many foods today are imported, prepared and/or eaten outside the home, and widely distributed after processing. As a result, an outbreak of foodborne disease can involve people in different localities, states, and even countries. The number and diversity of foodborne diseases further complicate surveillance. Although many of the better-known foodborne pathogens are bacteria, such as E. coli O157:H7 and Salmonella, foodborne diseases are caused by a variety of other pathogens, including viruses, parasites, and toxins. Some of these diseases also can be transmitted by nonfood sources, such as through water or through person-to-person contact. Appendix II describes the major foodborne diseases currently under national surveillance. The surveillance process usually begins when a person with a foodborne disease seeks medical care. To help determine the cause of the patient’s illness, a physician may rely on a laboratory test, which could be performed in the physician’s own office, a hospital, an independent clinical laboratory, or a public health laboratory. If the test shows that the patient is ill with a disease (including a foodborne disease) that must be reported under state law, or if the physician diagnoses the disease without the use of a test, the case is usually reported to the local health department. Health department staff collect these reports, check them for completeness, contact health-care professionals to obtain missing information or clarify unclear responses, and forward them to state health agencies. Staff resources devoted to disease reporting vary with the overall size and mission of the health department.
Because nearly half of local health agencies have jurisdiction over a population of fewer than 25,000, many cannot support a large, specialized staff to work on disease reporting. The states have principal responsibility for protecting the public’s health and therefore take the lead in conducting surveillance. In state health departments, epidemiologists analyze the data reported and decide when and how to supplement passive reporting with active surveillance methods, conduct outbreak and other disease investigations, and design and evaluate disease prevention and control efforts. They also transmit state data to CDC, providing routine reporting on selected diseases. Surveillance data are transmitted to CDC both electronically and using paper-based systems. Information about individual cases of disease is reported through two electronic systems. The National Electronic Telecommunications System for Surveillance collects data submitted by epidemiologists about patient demographics and residences, suspected or confirmed diagnoses, and the dates of disease onset. In contrast, the second system, the Public Health Laboratory Information System, collects more definitive data from public health laboratory officials on pathogens identified by laboratory tests. Both systems also offer disease-specific reporting options that states may use to report additional data to CDC. For some surveillance systems, such as the Viral Hepatitis Surveillance Program, data are submitted to CDC both electronically and using paper forms. For other surveillance systems, such as the Foodborne Disease Outbreak Surveillance System, the data are submitted primarily through paper reporting. CDC officials told us they have an ongoing effort to integrate public health information collected through these and other systems. They estimate this effort will take several years to complete.
Federal participation in the foodborne disease surveillance network focuses on CDC activities—particularly those of the National Center for Infectious Diseases. CDC analyzes the data furnished by states to (1) monitor national health trends, (2) formulate and implement prevention strategies, (3) evaluate state and federal disease prevention efforts, and (4) identify outbreaks that affect multiple jurisdictions, such as more than one state. CDC routinely provides public health officials, medical personnel, and others information on disease trends and analyses of outbreaks. In fiscal year 2000, CDC's budget for foodborne disease surveillance through the Food Safety Initiative was $29 million. In order to maximize the effectiveness of its surveillance efforts, CDC works with the Council of State and Territorial Epidemiologists, a professional association of public health epidemiologists from each U.S. state and territory. The council's members are responsible for monitoring trends in health and health problems and devising prevention programs that promote the health of the entire community. The council is currently in its eighth year of a cooperative agreement with CDC and collaborates with the agency on approximately 15 separate activities. CDC also works with the Association of Public Health Laboratories, which links local, state, national, and global health leaders in order to promote the highest quality laboratory practices worldwide. However, regardless of the completeness and comprehensiveness of a surveillance system, it can generally detect only a fraction of disease cases—the tip of the iceberg, at best, as shown in figure 1. Very few people who contract foodborne diseases actually seek treatment, are properly diagnosed, have their diagnoses confirmed through laboratory analysis, and then have their cases reported through the surveillance systems.
For example, a recent CDC-sponsored study estimated that 340 million annual episodes of acute diarrheal illness occurred in the United States, but only 7 percent of people who were ill sought treatment. The study further estimated that physicians requested laboratory testing of a stool culture for only 22 percent of those patients who sought treatment, which produced about 6 million test results that could be reported. Although federal participation in foodborne disease surveillance focuses on CDC activities, two other federal agencies have a key role in the wider arena of food safety and use surveillance information in their programs. USDA’s Food Safety and Inspection Service is responsible for ensuring that meat, poultry, and processed egg products moving in interstate and foreign commerce are safe. This agency primarily carries out its responsibilities through inspections at meat, poultry, and egg processing plants to ensure that these products are safe, wholesome, and accurately labeled. In addition, the Food and Drug Administration in the Department of Health and Human Services is responsible for ensuring that all other domestic and imported food products are safe. Unlike the USDA, the Food and Drug Administration, by and large, conducts post-market surveillance through domestic inspections and testing of products already in commerce to assure that foods are safe and comply with appropriate standards. This is especially true for imported foods where the surveillance program is primarily post-market testing, because the Federal Food, Drug and Cosmetic Act does not provide explicit inspection authority outside the United States. In addition to their other duties, these two agencies work to remove from the market foods that are implicated in foodborne disease outbreaks. CDC conducts surveillance of foodborne diseases through 20 systems. 
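The case-attrition estimate in the CDC-sponsored study above can be traced with a few lines of arithmetic. This is only a sketch using the rounded 7 and 22 percent figures quoted in the text, so the product comes out slightly below the study's published figure of about 6 million:

```python
episodes = 340_000_000           # estimated annual episodes of acute diarrheal illness
sought_care = episodes * 0.07    # about 7 percent of ill people sought treatment
cultures = sought_care * 0.22    # stool culture requested for about 22 percent of those
print(f"{cultures / 1_000_000:.1f} million reportable test results")  # → 5.2 million
```

The gap between this rounded-figure product and the study's roughly 6 million reflects rounding of the quoted percentages, not a discrepancy in the underlying data.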
Four of these 20 systems—the Foodborne Disease Outbreak Surveillance System, FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm—focus on foodborne diseases and cover multiple pathogens. The other 16 either collect data about a variety of diseases, only some of which are foodborne, or focus exclusively on a single foodborne disease. Collectively, these systems provide information to detect and control the spread of foodborne disease. The Foodborne Disease Outbreak Surveillance System collects nationwide information about the occurrence and causes of foodborne outbreaks. This system relies on local health officials to correctly identify, investigate, and report outbreaks to CDC through state public health officials. CDC uses the system to, among other things, compile and periodically report national outbreak data. In 1997, the latest year for which published data are available, states and U.S. territories reported 806 outbreaks to CDC through this system. Furthermore, information from this system can serve as a basis for regulatory and other changes to improve food safety. For example, data from the Foodborne Disease Outbreak Surveillance System have played an important role in documenting the importance of shell eggs as a source of human infection with Salmonella Enteritidis. In response to these data and other reports pointing out the dangers posed by improperly handled eggs, government agencies and the egg industry have taken steps to reduce Salmonella contamination of eggs. These steps include refrigerating eggs during transport from the producer to the consumer, identifying and removing infected laying flocks, diverting eggs from infected flocks to pasteurization facilities, and increasing on-farm quality assurance and sanitation measures.
CDC has advised state health departments, hospitals, and nursing homes of specific measures to reduce Salmonella Enteritidis infection, and the USDA tests the breeder flocks that produce egg-laying chickens to ensure that they are free of Salmonella Enteritidis. The Food and Drug Administration has amended its regulations, which now require that all shell eggs in retail establishments be held at a temperature of 45 degrees Fahrenheit or lower and that all egg cartons carry safe-handling instructions to inform consumers about proper storage and cooking of eggs. FoodNet is a surveillance system operating in nine sites selected by CDC on the basis of their capability to conduct active surveillance and their geographic location. FoodNet produces a more stable and accurate national estimate than is otherwise available of the frequency and sources of nine foodborne pathogens, hemolytic uremic syndrome (a serious complication of E. coli O157:H7 infection), Guillain-Barre syndrome (a serious complication of Campylobacter infection), and toxoplasmosis. These improved estimates result from the use of active surveillance and additional studies that are not characteristic of CDC's other foodborne surveillance systems. Public health departments that participate in FoodNet receive funds from CDC to systematically contact laboratories in their geographical areas and solicit incidence data. In 1999, state officials participating in FoodNet contacted each of the more than 300 clinical labs within the FoodNet areas on a regular basis. FoodNet studies include various "case control" studies, which are used to determine factors, such as food preparation or handling practices, that affect the risk of infection by pathogens covered by the system. The studies also examine the association between infections and specific foods.
In addition, public health officials that participate in FoodNet conduct surveys to identify physician and lab practices that may limit the identification of foodborne diseases. PulseNet is a nationwide network of public health laboratories that perform DNA “fingerprinting” on four types of foodborne bacteria in order to identify and investigate potential outbreaks. The four bacteria fingerprinted by PulseNet—Salmonella, E. coli O157:H7, Listeria, and Shigella—were selected because of their public health importance and the availability of specific “fingerprinting” methods for the pathogens. These four bacteria are either common or have severe symptoms, or both. Public health officials in 46 state and 2 local public health laboratories as well as the food safety laboratories of the USDA and the Food and Drug Administration submit “fingerprint” patterns of bacteria isolated from patients and/or contaminated food to the PulseNet database. The PulseNet network permits rapid comparison of the patterns in the database. Matches may indicate an outbreak. Similar patterns in samples taken from different patients suggest that the bacteria come from a common source, for example, a widely distributed contaminated food product. In addition, strains isolated from food products can be compared with those isolated from ill persons to provide evidence that a specific food caused the disease. By identifying these connections, PulseNet provides critical data for identifying and controlling the source of an outbreak, thus reducing the burden of foodborne disease for the pathogens within the scope of this network. Thirty survey respondents told us that, in the last 3 years, PulseNet had identified a cluster of cases in their state that turned out to be a previously unknown outbreak. In addition, 42 respondents reported that PulseNet helped their state detect and investigate outbreaks of E. coli O157:H7, Salmonella, Listeria, and/or Shigella. 
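The comparison step at the heart of PulseNet can be illustrated with a brief sketch. The pattern identifiers, patient labels, and two-case threshold below are hypothetical; the real network compares DNA "fingerprint" patterns in a central database rather than simple strings:

```python
from collections import defaultdict

def find_pattern_clusters(submissions, min_cases=2):
    """Group fingerprint submissions by pattern and flag any pattern
    reported for several patients, which may indicate a common source."""
    by_pattern = defaultdict(list)
    for patient_id, pattern in submissions:
        by_pattern[pattern].append(patient_id)
    # Matching patterns from different patients suggest a shared source,
    # such as a widely distributed contaminated food product.
    return {p: ids for p, ids in by_pattern.items() if len(ids) >= min_cases}

# Hypothetical submissions: (patient, fingerprint pattern)
subs = [("pt1", "PAT-A"), ("pt2", "PAT-B"),
        ("pt3", "PAT-A"), ("pt4", "PAT-A")]
clusters = find_pattern_clusters(subs)  # flags "PAT-A", shared by pt1, pt3, pt4
```

As the report notes, strains isolated from food products enter the same comparison, which is how the network can provide evidence that a specific food caused the disease.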
Twenty-five of these respondents said that PulseNet greatly helped in this area. In 2000, over 17,000 patterns were submitted to the PulseNet database, and 105 potential outbreaks were identified and investigated. Another system that CDC uses to detect potential foodborne outbreaks is the Surveillance Outbreak Detection Algorithm. In contrast to PulseNet, which uses advanced technology to compare bacterial DNA, the Surveillance Outbreak Detection Algorithm uses statistical analysis to compare currently reported incidence of two common pathogens, Salmonella and Shigella, to a historical baseline in order to detect unusual increases in a specific serotype, such as Salmonella Enteritidis. Such increases may indicate an outbreak. CDC selected Salmonella and Shigella because there are many different serotypes of these organisms, and tracking and comparing the frequency of each serotype was a task well suited for computer analysis. In addition, baseline data for these two pathogens were already available through the National Salmonella Surveillance System and the National Shigella Surveillance System, described below and in appendix III. Beginning in 2002, CDC plans to expand the system to include E. coli O157:H7. Twenty-five of the states that we surveyed told us that, at least once in the last 3 years, the Surveillance Outbreak Detection Algorithm had identified a cluster of cases in their state that turned out to be a previously unknown outbreak. In addition to these 4 systems, CDC also has the following 16 systems that either collect information about a number of diseases, only some of which are foodborne, or focus solely on one disease: The Botulism Surveillance System is a national system designed to collect information about all types of botulism, including foodborne. Because every case of foodborne botulism is considered a public health emergency, CDC maintains intensive surveillance for botulism in the United States.
CaliciNet is a network of public health laboratories that perform genetic "fingerprinting" for foodborne viruses, allowing rapid identification and comparison of strains. The Creutzfeldt-Jakob Disease Surveillance Program monitors the occurrence of this disease through periodic review of national cause-of-death data. Surveillance for this disease was enhanced in 1996 to monitor for the possible occurrence of new variant Creutzfeldt-Jakob disease after this new form of the disease was reported to have possibly resulted from consumption of cattle products contaminated with bovine spongiform encephalopathy (also known as "mad cow" disease). The Epidemic Information Exchange (Epi-X) is a secure Web-based communications network that allows local, state, and federal public health officials to share and discuss outbreak data on a real-time basis. This system can immediately notify health officials of urgent public health events so that they can take appropriate actions. The Escherichia coli O157:H7 Outbreak Surveillance System is a national system established to collect detailed information about risk factors and vehicles of transmission for E. coli infection and is used to inform the public about new vehicles of transmission. The National Antimicrobial Resistance Monitoring System is used to monitor the antimicrobial resistance of certain bacteria that are under surveillance through other systems. The system currently operates in 17 sites throughout the United States. The National Giardiasis Surveillance System includes data from participating states about reported cases of giardiasis—a condition caused by a parasite found in contaminated water or food such as fruits and vegetables. This system began in 1992, when the Council of State and Territorial Epidemiologists assigned giardiasis a code that enabled states to begin voluntarily reporting surveillance data on this disease to CDC electronically.
The National Notifiable Diseases Surveillance System is a national system that collects information about 58 diseases, most of which are not considered foodborne, about which regular, frequent, and timely information is considered necessary for their prevention and control. Data from the system are used to analyze disease trends and determine relative disease burdens on a national basis. The National Salmonella Surveillance System is a national system that collects information on the isolates of Salmonella that are serotyped in state public health laboratories, as well as the isolates from food and animals. This system tracks the frequency of more than 500 specific serotypes to determine trends, detect outbreaks, and focus interventions. The system can detect outbreaks either locally or spread out over several jurisdictions. The National Shigella Surveillance System is a national system that collects information on the isolates of Shigella that are serotyped in state public health laboratories. This system tracks the frequency of more than 40 specific serotypes to determine trends, detect outbreaks, and focus interventions. The system can detect outbreaks either locally or spread out over several jurisdictions. The Salmonella Enteritidis Outbreak Surveillance System is a national system designed to track these outbreaks and to collect information on implicated food items and the results of traceback investigations conducted by local agencies and the Food and Drug Administration. The Sentinel Counties Study of Viral Hepatitis is carried out in six U.S. counties to elicit more detailed information on individual hepatitis cases and collect samples for further analyses. The Trichinellosis Surveillance System is a national surveillance system used to monitor long-term trends for this disease. The Typhoid Fever Surveillance System is a national surveillance system for monitoring long-term trends in the epidemiology of typhoid fever in the United States. 
The system provides information about risk factors that is used in making vaccine recommendations. The Vibrio Surveillance System is composed of two parts: a national system used for reporting cases of Vibrio cholerae (cholera), and another system, which is more geographically limited, that is used for reporting all Vibrio infections. All cases reported to this system are confirmed through laboratory tests by the relevant state or CDC. Surveillance data for this system are used to identify environmental risk factors, retail food outlets where high-risk exposures occur, and target groups that may benefit from consumer education. The Viral Hepatitis Surveillance Program is a national system designed to collect information about acute cases of viral hepatitis: hepatitis A; hepatitis B; and non-A, non-B hepatitis (including hepatitis C). States report basic demographic information for each case, as well as other factors, such as risk-factor information. These data are essential for monitoring trends in the characteristics of the various types of viral hepatitis. Collectively, these surveillance systems provide crucial national data needed to detect and control the spread of foodborne disease. More detailed information about these systems is contained in appendix III, in alphabetical order by system. Public health officials that we contacted said that both untimely release of surveillance data by CDC and the gaps in some of CDC’s data limit the surveillance systems’ usefulness. Some of these problems have resulted from staff shortages at CDC, while others have been caused by shortages of trained epidemiologists and laboratory personnel at state and local health departments. Another contributing factor is that each state decides which diseases it will track and which ones it will not. Therefore, the diseases that are reported to CDC vary from one state to another. 
In response to these problems, CDC has taken action to address its staff deficiencies and to assist state and local health officials to improve their data collection and reporting abilities. CDC's actions represent a good first step toward providing public health officials with more timely and complete surveillance data. Delayed dissemination of information from CDC's foodborne disease surveillance systems has impaired the usefulness of the data. For example, for the Foodborne Disease Outbreak Surveillance System, CDC did not publish outbreak data for the years 1993–1997 until March 2000. CDC officials told us that the late publication of the March 2000 outbreak report was due in part to staff shortages. As of June 2001, data from 1997 were the most recent available from this system. Officials from both the Food and Drug Administration and USDA's Food Safety and Inspection Service told us that this delay limited the data's usefulness. In addition, of the 52 respondents to our survey, 26 said that the 3-year lag between the end of the reporting period and the publication of CDC's March 2000 report diminished the usefulness of the report to their state. Of the 43 survey respondents that used this report, nearly all said that the outbreak data were used as a source of information about foodborne disease trends or to determine associations between pathogens and food. Many survey respondents also told us that more rapid reporting or release of data from FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm would improve the systems' usefulness. For FoodNet, CDC publishes surveillance results annually. However, as of June 2001, CDC had not published any detailed results from its case control studies about the proportion of foodborne disease caused by specific foods or food preparation and handling practices, even though FoodNet has been operational since 1995.
CDC officials told us that they had submitted the results of these surveys and studies to professional journals, but the results were never published. For PulseNet, nearly half of the survey respondents said that more rapid analysis of data and more rapid reporting of identified clusters would make the system more useful. In addition, 33 of the respondents said that direct access to the PulseNet database would make the system more useful. For the Surveillance Outbreak Detection Algorithm, 25 of the respondents said that more rapid analysis of state, regional, and national data by CDC would make that system more useful. In addition, 20 respondents said more rapid reporting of clusters by CDC would make the system more useful. To address the staff shortages that contributed to the late publication of the March 2000 outbreak report, CDC hired four new staff between June 2000 and September 2000 to take on the responsibilities of collecting, verifying, coding, processing, and summarizing the outbreak data in addition to other duties. In the future, CDC plans to release outbreak data annually beginning with 1998 data, instead of aggregating these data over several years. CDC is currently compiling 2001 outbreak data and intends to publish it by the end of 2002. In addition, CDC is developing a system, called the Electronic Foodborne Outbreak Reporting System, which will allow states to electronically transmit reports of foodborne disease outbreaks. Thirty-six survey respondents indicated that this system would increase the timeliness of their initial outbreak reports to CDC. Finally, in November 2000 CDC introduced an electronic bulletin board, known as Epi-X, which allows local, state, and federal public health officials to share outbreak data on a real-time basis. This system can automatically notify health officials of urgent public health events so that they can take appropriate actions.
CDC also has plans to provide more rapid reporting or release of data from FoodNet and PulseNet. For FoodNet, CDC officials said they plan to publish by the end of 2001 a number of case control study results that were previously unavailable. For PulseNet, CDC told us it has developed new software that, effective June 30, 2001, gives all participating certified laboratories direct access to the PulseNet database. This allows state officials to query the PulseNet database directly instead of waiting for CDC to send them notice of a new pattern. However, CDC's ability to disseminate surveillance data in a timely fashion also depends in part on the timeliness of state and local officials' submittal of the data. For example, for the Foodborne Disease Outbreak Surveillance System, 24 of the survey respondents said they did not report any outbreak data for 2000 until the end of the year or even later. Thus, data could be over a year old before being reported to CDC. Similarly, CDC officials also told us that for the Surveillance Outbreak Detection Algorithm, some states report information only quarterly, which is too late to allow CDC to provide early detection of an ongoing outbreak. Because responsibility for surveillance of foodborne diseases rests primarily with the states, states' reporting of data to CDC is voluntary. To assist in overcoming this problem, CDC is developing a new program known as the National Electronic Disease Surveillance System. This system is intended to facilitate the ready exchange of data between local and state health departments, among states, and between the states and CDC. While this may not overcome delayed reporting by the states, it should make information more readily available. In addition, through its Epidemic Intelligence Service program, CDC is training medical doctors, researchers, and scientists, who serve in 2-year assignments, about the needs of both state health departments and CDC.
Agency officials said that they hope graduates from the program will understand the value of sharing information in a timely manner and help speed the flow of information into CDC. The completeness of CDC's data depends in large part on submissions from state and local health officials, who often do not report all cases or all information requested about individual cases. For example, 17 survey respondents told us that not all of the outbreaks in their states were reported to the Foodborne Disease Outbreak Surveillance System. Moreover, for those outbreaks that were reported, 25 survey respondents said the responsible pathogen was identified in only half or fewer of their reports submitted to CDC. Further, 28 survey respondents said they identified and reported the contaminated food item that caused the outbreak in half or fewer of their reports. According to FDA and FSIS officials, identifying the responsible pathogen and the contaminated food item is critical for understanding and controlling foodborne disease, and for tracing the cause of the contaminant to its original source. Survey respondents cited several reasons for the gaps in outbreak information sent to CDC. Table 1 summarizes some of the major reasons. As the table shows, the majority of the respondents said shortages of personnel and capacity in state and local health departments, among other things, hinder their ability to detect and investigate foodborne disease outbreaks. A complete listing of conditions that could hinder state and local public health officials is included in our questionnaire results, contained in appendix I. Another cause of incomplete data submissions to the Foodborne Disease Outbreak Surveillance System, as well as to other systems, is the lack of standard disease reporting requirements among states. Each state has a separate list of "reportable" diseases that must be reported to the state health department.
The lists vary greatly from state to state because of differences in the extent to which the diseases occur. For example, while 32 survey respondents indicated that health providers in their state are required to notify state or local health departments about cases of cyclosporiasis, 19 said notification was not required. (See app. I for more information on state reporting requirements for a number of foodborne pathogens.) Although states can forward data to CDC about diseases that are not reportable, overall data about such diseases are often incomplete because of deficiencies in reporting by physicians and labs. To improve local and state health officials’ ability to respond to a broad range of public health issues relating to infectious diseases, which include foodborne outbreaks, CDC provides funding to state and local health departments through its Emerging Infections Programs and its Epidemiology and Laboratory Capacity program. Funding for these two programs has increased from $900,000 in 1994 to approximately $50 million in 2001. These programs are designed to address staffing or technology shortages, or both, and will help the states provide CDC with more complete information. For example, states have received grants to significantly increase the capacity of their laboratories. According to CDC officials, now nearly every state has properly trained staff able to use PulseNet technology. To encourage more standardized reporting among the states, CDC consults annually with the Council of State and Territorial Epidemiologists to determine which infectious diseases, including foodborne diseases, are important enough to merit routine reporting to CDC. Officials from CDC told us they have also entered into cooperative agreements with the council and with the Association of Public Health Laboratories to assess the states’ capability and capacity to address public health issues, including foodborne diseases. 
In commenting on a draft of this report, CDC officials generally agreed with the overall message of the report and provided technical comments to ensure completeness and accuracy. We incorporated these comments into our report as appropriate. CDC comments are presented in appendix IV. To describe CDC’s foodborne disease surveillance systems, we obtained information from CDC on the systems used most often in conducting foodborne disease surveillance activities. We examined each of these systems to identify their use and how they operate. We also discussed the systems’ use and operation with officials from the Food and Drug Administration, USDA’s Food Safety and Inspection Service, the Council of State and Territorial Epidemiologists, the Association of Public Health Laboratories, the National Pork Producers Council, the American Meat Institute, the National Broilers Council, and the Center for Science in the Public Interest. As a result of our initial work, we then directed the remainder of our review effort to four surveillance systems that focus on foodborne disease and that cover more than one pathogen. These four systems were the Foodborne Disease Outbreak Surveillance System, FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm. We reviewed extensive literature about each of these four systems and examined the systems’ input and reporting documentation. To identify limitations of these surveillance systems, we sent mail-back questionnaires to officials in the 50 state health departments, as well as in the District of Columbia, and New York City. We pretested this survey in three states to ensure that our questions were clear, unbiased, and precise, and that responding to the survey did not place an undue burden on the health departments. We received completed questionnaires from 100 percent of those surveyed. 
We discussed limitations identified in the survey with CDC and other federal and state public health officials and with other groups that use foodborne disease surveillance systems. To identify initiatives designed to address these limitations, we met with CDC officials responsible for the surveillance systems and discussed actions they have taken or plan to take to address the limitations. We conducted our review from August 2000 through July 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the congressional committees with jurisdiction over food safety issues; the Secretary of Health and Human Services; the Director, Office of Management and Budget; and other interested parties. We will also provide copies to others on request. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V.

Appendix II: Major Foodborne Pathogens Under Surveillance by the Centers for Disease Control and Prevention

[Table: for each major foodborne pathogen under surveillance, typical symptoms (e.g., fever, abdominal cramps, and diarrhea, often bloody; profuse watery diarrhea, vomiting, circulatory collapse, and shock) and modes of transmission (e.g., person to person; contaminated food or water; contact with a contaminated item (fecal-oral))]

The Botulism Surveillance System was established in 1973 to collect detailed information about all types of botulism—foodborne, wound, infant, and child or adult. Because every case of foodborne botulism is considered a public health emergency, CDC maintains intensive surveillance for botulism in the United States.
All states except California and Alaska must contact CDC when a case of botulism is suspected, because CDC is the main source of the antitoxin used to treat botulism. As a result, most cases of botulism are reported to CDC immediately. CDC officials follow up on these cases to collect demographic information about the affected individuals, as well as additional information about which foods were involved and their handling and preparation. This information is especially important because the hazardous food may still be available. Geographic Scope: National. Pathogen: Clostridium botulinum. Cases Reported: In 1999, a total of 174 cases were reported to this system, of which 26 were foodborne. CaliciNet, an initiative currently under development, is a network of public health laboratories that uses DNA sequence analysis for “fingerprinting” of foodborne viruses. The network permits rapid comparison of the genetic patterns of foodborne caliciviruses through an electronic sequence database at CDC. Laboratories participating in CaliciNet detect “Norwalk- like” viruses in samples from patients involved in outbreaks of gastroenteritis. Depending on the capabilities in the laboratory, amplification products from positive samples are sequenced locally, sent to a contract laboratory for sequencing, or sent to CDC for confirmatory testing and sequencing. Comparison of newly identified sequences with those in the database may help public health laboratories to identify cases with a common source. Geographic Scope: Thirteen state health departments (California, Florida, Idaho, Iowa, Maryland, Michigan, Minnesota, Missouri, New York, Oregon, Virginia, Washington, and Wisconsin) and the Los Angeles County health department are currently submitting samples for confirmatory testing and genetic analysis. 
Ten other state health departments (Colorado, Connecticut, Illinois, Nevada, New Hampshire, New Mexico, Ohio, Rhode Island, South Carolina, and Tennessee) are currently undergoing proficiency testing. Pathogens: “Norwalk-like” viruses and “Sapporo-like” viruses. Cases Reported: In 1999, 94 specimens from 9 states were submitted for confirmatory testing and genetic analysis at CDC. CDC monitors the occurrence of Creutzfeldt-Jakob disease through periodic review of national multiple-cause-of-death data. Surveillance for this disease was enhanced in 1996 to monitor for the possible occurrence of new variant Creutzfeldt-Jakob disease after this new form of the disease was reported to have possibly resulted from consumption of cattle products contaminated with bovine spongiform encephalopathy (also known as “mad cow” disease). One enhancement focused on striking differences in the age distribution of new variant Creutzfeldt-Jakob disease cases, for which the median age at death is 28 years, from that of sporadic cases of Creutzfeldt-Jakob disease in the United States, for which the median age at death is 68 years. This enhancement included an ongoing review of the clinical and pathologic records of U.S. victims of Creutzfeldt-Jakob disease under 55 years of age. In addition, in collaboration with the American Association of Neuropathologists, CDC established a National Prion Disease Pathology Surveillance Center to facilitate neuropathologic evaluation of patients suspected of having Creutzfeldt-Jakob disease or other diseases caused by prions. Geographic Scope: National. Pathogens: The agents of Creutzfeldt-Jakob disease and the new variant form of Creutzfeldt-Jakob are believed to be prions. Cases Reported: Between January 1979 and June 2001, over 5,000 U.S. cases of Creutzfeldt-Jakob disease were reported; no evidence of the occurrence of new variant Creutzfeldt-Jakob disease in the United States was detected. 
The Epidemic Information Exchange, known as Epi-X, is a secure, Web-based communications network for public health officials that simplifies and expedites the exchange of routine and emergency public health information among state and local health departments, CDC, and the U.S. military. CDC recognized that the public health profession had a need for rapid communication, research, and response to widespread food and food-product contamination. After consulting with more than 300 health officials, CDC developed this new system, which enables federal, state, and local epidemiologists, laboratory staff, and other health professionals to quickly notify colleagues of disease outbreaks as they are identified and investigated. The system allows users to compare information on current and past outbreaks through an easily searchable database, discuss a response to the outbreak with colleagues through e-mail, Internet, and telecommunications capabilities, and request epidemiological assistance from CDC on-line. Epi-X is endorsed by the Council of State and Territorial Epidemiologists. Geographic Scope: National. Pathogens: Any pathogen, including bacteria, chemicals, parasites, and viruses (also products or devices). Cases Reported: From November 2000 through August 2001, 153 outbreaks were reported, including 37 foodborne outbreaks. Two health alerts related to foodborne outbreaks were issued; over 85 percent of Epi-X users were notified within 30 minutes. The Escherichia coli (E. coli) O157:H7 Outbreak Surveillance System began in 1982, after the first recognized outbreak of this pathogen, and was established to collect detailed information about risk factors and vehicles of transmission for E. coli infection. State health departments are encouraged to report any outbreak of E. coli O157:H7 infection in their state to CDC.
Data are collected on outbreaks caused by all sources including food, recreational water, drinking water, animal contact, and person-to-person transmission. E. coli O157:H7 infections can be quite serious and may result in death. Therefore, public health officials at CDC follow up with state health departments on reported outbreaks of E. coli infection to determine their cause and prevent additional spread. Data from this surveillance system are used to inform the public about new vehicles of transmission. Geographic Scope: National. Pathogen: E. coli O157:H7. Cases Reported: In 1999, 38 confirmed outbreaks (causing 1,897 illnesses) were reported to CDC. CDC created the Foodborne Disease Outbreak Surveillance System in 1973 to collect data about cases of foodborne disease that are contracted by two or more patients as a result of ingesting a common food. In the event of such an outbreak, state and local public health department officials provide data to the system about the pathogen that caused the outbreak, the contaminated food that was involved, and contributing factors associated with foodborne disease outbreaks. The data help focus public health actions intended to reduce illnesses and deaths caused by foodborne disease outbreaks. Trend analysis of the data shows whether outbreaks occur seasonally and whether certain foods are more likely to contain pathogens. It also helps public health officials identify critical control points in the path from farm to table that can be monitored to reduce food contamination. However, the data from this system do not always identify the pathogen responsible for a given outbreak; such identification may be hampered by delayed or incomplete laboratory investigation, inadequate laboratory capacity, or inability to recognize a particular pathogen as a cause of foodborne disease. Geographic Scope: All 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands. 
Pathogens: Any pathogen, including bacteria, chemicals, parasites, and viruses. Cases Reported: In 1997, 806 outbreaks were reported to CDC through this system. The Foodborne Diseases Active Surveillance Network, also known as FoodNet, is a collaborative project of the CDC, the USDA, the Food and Drug Administration, and nine sites that gathers information about nine foodborne pathogens, two syndromes, and toxoplasmosis. A significant distinction between FoodNet and other foodborne surveillance systems is that FoodNet participants actively and routinely contact the clinical laboratories in their areas to collect information about the number of cases of each disease covered by this system. For other systems, state and local reporting practices to CDC may not be consistent from state to state. In addition to the active surveillance efforts, FoodNet participants conduct studies and surveys of the physicians, laboratories, and populations within the nine sites. Case control studies are used to determine risk factors, such as food preparation or handling practices, for acquiring infections from the pathogens covered by the system, as well as the association between these infections and specific foods. These studies have been conducted for E. coli O157:H7, Salmonella, Campylobacter, and others. CDC also collects information through population surveys, in which individuals who live in a FoodNet catchment area and were not part of a case control study are surveyed about their consumption of certain foods and how often they see a physician. To determine which tests are typically performed at laboratories in FoodNet areas, CDC administers laboratory surveys. Finally, state officials in the FoodNet areas have administered two physician surveys. The first survey asked physicians to describe actions they take when seeing a patient with a possible foodborne illness, while the second asked how they educate patients about foodborne diseases. 
FoodNet data can also test the efficacy of interventions designed to reduce the incidence of foodborne pathogens. Geographic Scope: Nine sites consisting of parts or all of the states of California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New York, Oregon, and Tennessee. Pathogens: Nine pathogens—Campylobacter, Cryptosporidium, Cyclospora, E. coli O157:H7, Listeria monocytogenes, Salmonella, Shigella, Vibrio, Yersinia enterocolitica—and hemolytic uremic syndrome (a serious complication of E. coli O157:H7 infection), Guillain-Barre syndrome (a serious complication of Campylobacter infection), and toxoplasmosis. Cases Reported: The number of cases varies by pathogen. The National Antimicrobial Resistance Monitoring System for Enteric Bacteria began in 1996 as a collaborative effort among CDC, the Food and Drug Administration, and USDA. Its purpose is to monitor the antimicrobial resistance of human enteric (intestinal) bacteria. Participating health departments forward some portion of their isolates for six types of bacteria to CDC for susceptibility testing. Susceptibility testing involves determining the sensitivity of the bacteria toward 17 antimicrobial agents that inhibit their growth. Campylobacter isolates are submitted only by the FoodNet sites and are tested against 8 antimicrobial agents instead of 17. Because these data have been collected continually since 1996, trend analyses are possible. This can provide useful information about patterns of emerging resistance, which in turn can guide mitigation efforts. Geographic Scope: Seventeen state and local public health laboratories in California, Colorado, Connecticut, Florida, Georgia, Kansas, Los Angeles County, Maryland, Minnesota, Massachusetts, New Jersey, New York City, New York, Oregon, Tennessee, Washington, and West Virginia participate in this system. Pathogens: Campylobacter, Enterococcus, E. coli O157:H7, Salmonella non-typhoidal, Salmonella typhi, and Shigella.
Cases Reported: The number of cases varies by pathogen. The National Giardiasis Surveillance System began in 1992 when the Council of State and Territorial Epidemiologists assigned giardiasis a code that enabled states to voluntarily report giardiasis cases to CDC electronically. For each case, basic information is collected, such as the age, sex, and race of the patient, as well as the place and time of infection. This surveillance system provides data used to educate public health practitioners and health-care providers about the scope and magnitude of giardiasis in the United States. The data can also be used to establish research priorities and to plan future prevention efforts. In June 2001, the Council of State and Territorial Epidemiologists voted to add giardiasis to the list of Nationally Notifiable Diseases. Geographic Scope: Forty-three states, the District of Columbia, New York City, Guam, and Puerto Rico. Pathogen: Giardia intestinalis (also known as Giardia lamblia). Cases Reported: In 1999, over 23,000 cases of giardiasis were reported to CDC through this system. The National Notifiable Diseases Surveillance System collects information about 58 diseases designated as nationally notifiable—that is, diseases about which regular, frequent, and timely information regarding individual cases is considered necessary for their prevention and control. The first annual report on notifiable diseases was published in 1912 for 10 diseases. CDC assumed responsibility for the collection and publication of this data in 1961. The list of nationally notifiable diseases is revised periodically to include emerging pathogens and to delete those whose incidence has declined significantly. CDC also publishes provisional figures for some of these diseases weekly. Policies for reporting notifiable disease cases can vary by disease or reporting jurisdiction, depending on case status classification (i.e., confirmed, probable, or suspect). 
Reporting of diseases is mandated by legislation or regulation only at the state and local level. Thus, the list of diseases considered notifiable varies slightly by state. Public health officials report basic information for each case, such as age, sex, and race of the patient, as well as the place and time of infection. The data reported in the annual summaries for this system are useful for analyzing disease trends and determining relative disease burdens. Geographic Scope: National. Pathogens/Diseases: Botulism, cholera, cryptosporidiosis, cyclosporiasis, E. coli, hepatitis A, listeriosis, salmonellosis, shigellosis, trichinosis, and typhoid fever (also 47 other pathogens or diseases, which are not considered to be foodborne). Number of Cases Reported: The number of cases varies by disease. The National Salmonella Surveillance System began in 1962 when the Council of State and Territorial Epidemiologists and the Association of Public Health Laboratories agreed that state public health laboratories would routinely test samples of Salmonella to determine their serotype and report the results to CDC. For many years these reports were submitted as paper forms, but for the last 10 years, reporting has been electronic. In addition to the specific serotype, the reports include the age, sex, and county of residence of the person from whom the sample was isolated, the clinical source (such as stool, blood, or abscess), and the date the sample was received in the state laboratory. CDC maintains the national reference laboratory for Salmonella and provides the laboratory reagents and training needed to determine the serotypes. These data are used to identify long-term trends and specific populations at risk for infection, detect and investigate outbreaks, and monitor the effectiveness of prevention efforts. Geographic Scope: All 50 states, New York City, and Guam. Pathogens: Salmonella enterica. 
Cases Reported: In 1999, approximately 32,750 cases were reported to CDC through this system. The National Shigella Surveillance System began in 1963 when the Council of State and Territorial Epidemiologists and the Association of Public Health Laboratories agreed that state public health laboratories would routinely test samples of Shigella to determine their serotype and report the results to CDC. For many years these reports were submitted as paper forms, but for the last 10 years, reporting has been electronic. In addition to the specific serotype, the reports include the age, sex, and county of residence of the person from whom the sample was isolated, the clinical source (such as stool, blood, or abscess), and the date the sample was received in the state laboratory. CDC maintains the national reference laboratory for Shigella and provides the laboratory reagents and training needed to determine the serotypes. These data are used to identify long- term trends and specific populations at risk for infection, detect and investigate outbreaks, and monitor the effectiveness of prevention efforts. Geographic Scope: All 50 states, New York City, and Guam. Pathogen: Shigella species. Cases Reported: In 1999, approximately 12,000 cases were reported to CDC through this system. PulseNet is a national network of public health laboratories that, since 1996, has been using standardized methods to perform genetic “fingerprinting” of four types of foodborne bacteria. The network permits rapid comparison of the bacteria’s genetic patterns through an electronic database at CDC. Laboratories participating in PulseNet use a method called pulsed-field gel electrophoresis to identify the genetic patterns in bacterial pathogens isolated from patients and from suspected food items. Once the patterns are generated, they are entered into an electronic database of patterns at the state or local health department and transmitted to CDC where they are filed in the PulseNet database. 
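A minimal sketch of such a pattern database, with patterns filed centrally by pathogen and compared for matches, can be written in a few lines of Python. This is an illustration only, not CDC's implementation: the PFGE "fingerprints" are modeled as opaque strings, and the class name, lab names, and 60-day matching window are all hypothetical assumptions.

```python
from collections import defaultdict
from datetime import date, timedelta

class PatternDatabase:
    """Toy model of a central fingerprint database: patterns are filed
    by pathogen, and a resubmission of the same pattern within a time
    window flags a possible common-source cluster."""

    def __init__(self, window_days=60):
        self.window = timedelta(days=window_days)
        # pathogen -> pattern -> list of (lab, submission date)
        self.filed = defaultdict(lambda: defaultdict(list))

    def submit(self, pathogen, pattern, lab, submitted_on):
        """File a pattern; return earlier submissions of the same
        pattern within the window (potential cluster matches)."""
        matches = [(other, d)
                   for other, d in self.filed[pathogen][pattern]
                   if abs(submitted_on - d) <= self.window]
        self.filed[pathogen][pattern].append((lab, submitted_on))
        return matches

db = PatternDatabase()
db.submit("E. coli O157:H7", "pattern-A1", "Minnesota", date(2000, 5, 1))
alerts = db.submit("E. coli O157:H7", "pattern-A1", "Oregon", date(2000, 5, 20))
# Matching patterns from two states within the window suggest a
# possible common exposure worth epidemiologic follow-up.
```

In practice the comparison is performed on pulsed-field gel electrophoresis results rather than strings; the sketch captures only the filing-and-matching logic.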
If patterns submitted by laboratories during a defined time period are found to match, CDC will alert the laboratory officials of the match so that a timely investigation can be performed. PulseNet can help public health authorities recognize when cases of foodborne illness occurring at the same time in geographically separate locales are caused by the same strain of bacteria and may be due to a common exposure, such as a food item. An epidemiologic investigation of those cases can then determine what they have in common. If a bacterial pathogen is isolated from a suspected food, the pathogen’s genetic pattern can be quickly compared with the patterns of pathogens isolated from patients. Matching patterns can indicate possible nationwide outbreaks and lead to public health actions such as epidemiologic investigations, product recalls, and long-term prevention measures. Geographic Scope: 46 state and 2 local public health laboratories—New York City and Los Angeles County—and the food safety laboratories of the Food and Drug Administration and USDA. Pathogens: E. coli O157:H7, Salmonella, Listeria, and Shigella. Cases Reported: In 2000, over 17,000 patterns were submitted to the CDC PulseNet database, and 105 potential outbreaks were investigated by state and local officials. The Salmonella Enteritidis Outbreak Surveillance System began in 1985. This passive system collects reports of outbreaks as they occur throughout the calendar year. States are encouraged to report any outbreak of Salmonella Enteritidis infection in their state to CDC. The surveillance system tracks morbidity and mortality associated with outbreaks and collects information on implicated food items and on the results of traceback investigations conducted by local agencies and the Food and Drug Administration. Surveillance data have been used to identify risk factors for Salmonella Enteritidis infection, contaminated food items, and groups that may benefit from education. 
Geographic Scope: National. Pathogen: Salmonella Enteritidis. Outbreaks Reported: In 1999, 44 confirmed outbreaks of Salmonella Enteritidis were reported, affecting U.S. residents in 17 states. The Sentinel Counties Study of Viral Hepatitis began in 1979 to collect more detailed information on risk factors for cases of acute viral hepatitis and to detect newly emerging viruses. Under contracts with CDC, county health departments collect data for each reported case and a serum sample for each reported case and report the information to CDC. In recent years, data from this system have been used to better characterize hepatitis A epidemiology and to develop molecular subtyping techniques. Geographic Scope: Six counties–Pinellas, Florida; Jefferson, Alabama; Denver, Colorado; Pierce, Washington; Multnomah, Oregon; and San Francisco, California. Pathogens: Hepatitis A; hepatitis B; and non-A, non-B hepatitis (including hepatitis C). Cases Reported: In 1999, 240 cases of hepatitis A, 134 cases of hepatitis B, and 32 cases of non-A, non-B hepatitis (including hepatitis C) were reported to CDC through this system. The Surveillance Outbreak Detection Algorithm was designed to detect unusual clusters of cases of a foodborne disease that indicate a potential outbreak. The algorithm was first used in 1996 for Salmonella cases. The algorithm compares, by serotype, the number of cases reported through the Public Health Laboratory Information System during a given week with a 5-year historical baseline for that serotype and week to detect unusual increases from the baseline. The weekly comparisons are done on a national, regional, and state basis. If they detect any unusual clusters, CDC notifies the affected state(s) by fax. The Surveillance Outbreak Detection Algorithm is useful for identifying multistate outbreaks, especially where individual cases may be quite diffuse. The software also has an interface with which any user can easily generate basic statistical information. 
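The weekly comparison against a 5-year historical baseline can be sketched as follows. This is a simplified illustration, not CDC's actual statistical rule: the mean-plus-two-standard-deviations threshold and the case counts are assumptions made for the example.

```python
from statistics import mean, stdev

def flag_unusual(current_count, baseline_counts, sd_multiplier=2.0):
    """Flag a serotype/week whose reported case count is unusually
    high relative to counts for the same week in the five baseline
    years. The threshold is illustrative, not CDC's exact method."""
    threshold = mean(baseline_counts) + sd_multiplier * stdev(baseline_counts)
    return current_count > threshold

# Hypothetical serotype counts for the same calendar week in the
# five prior years:
baseline = [12, 15, 11, 14, 13]
print(flag_unusual(40, baseline))  # a sharp increase: True
print(flag_unusual(14, baseline))  # within historical range: False
```

A flagged serotype/week would then trigger notification of the affected states, as the text describes.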
The interface also produces graphs and maps to facilitate identification of trends or anomalies. State health departments have access to a limited version of the algorithm via the Public Health Laboratory Information System. Geographic Scope: National. Pathogens: Salmonella and Shigella. Cases Reported: Using the algorithm, CDC officials identified 133 potential Salmonella outbreaks in 1999 and 273 in 2000. The Trichinellosis (Trichinosis) Surveillance System was created in 1947, when the U.S. Public Health Service began collecting statistics on cases of infection at the national level. In 1965, trichinellosis was included among the notifiable diseases that physicians report weekly to state health departments and to CDC through the National Morbidity Reporting System. A standardized surveillance form was developed to collect detailed information for each case. Geographic Scope: National. Pathogen: Trichinella spp. Cases Reported: In 1999, 12 cases were reported to CDC through this system. The Typhoid Fever Surveillance System was established in 1962 to collect detailed information about all cases of Salmonella typhi. State health department officials are asked to complete a typhoid fever surveillance report form when a laboratory confirms a case of typhoid fever. The form collects demographic information about each case, as well as information about patients’ international travel and vaccination history, and the antibiotic susceptibility of isolates. This information is especially important for developing travel advisories, vaccination recommendations, and treatment guidelines. Geographic Scope: National. Pathogen: Salmonella typhi. Cases Reported: In 1999, 115 cases were reported to this system. The Vibrio Surveillance System began in 1988 and is composed of two parts. 
One is a national passive system for reporting cases of toxigenic Vibrio cholerae infection (cholera), and the other is a more active system that covers all types of Vibrio infections in a more limited geographic area. For the active system, investigators use a standardized form to collect clinical data, information about patients’ underlying illnesses, and epidemiologic data about patients’ seafood consumption and exposure to seawater for the week preceding illness. Surveillance data have been used to identify environmental risk factors, retail food outlets where high-risk exposures occur, and groups that may benefit from consumer education. Geographic Scope: National for the cholera portion of the system; the non-cholera portion of the system initially included only the Gulf Coast states of Alabama, Florida, Louisiana, and Texas but is open to all states and has expanded to include, among others, the FoodNet sites and states along both the East and West coasts. Pathogen: Toxigenic Vibrio cholerae; Vibrio spp. Cases Reported: In 2000, four cases of Vibrio cholerae and 295 laboratory-confirmed cases of other types of Vibrio infections were reported to CDC through this system. To enhance the accuracy and completeness of reporting, CDC requests that participating states verify the information reported twice a year. The Viral Hepatitis Surveillance Program was created in 1961 to collect demographic, clinical, serologic, and risk-factor information on cases of acute viral hepatitis. The data collected through the program are essential for monitoring trends in the epidemiologic characteristics of the various types of viral hepatitis. These data are also valuable for monitoring the effectiveness of prevention programs. Pathogens: Hepatitis A; hepatitis B; non-A, non-B hepatitis (including hepatitis C). Geographic Scope: National. 
Number of Cases Reported: In 1999, 17,047 cases of hepatitis A, 7,694 cases of hepatitis B, and 3,111 cases of non-A, non-B hepatitis were reported through the National Electronic Telecommunication Surveillance System. Information about risk factors was reported through the Viral Hepatitis Surveillance Program for approximately 33 percent of these cases. Source of Data: States report this information to CDC through the extended-record capability of the National Electronic Telecommunication Surveillance System or by submitting a paper form with this information. In addition to those named above, Carolyn Boyce, Cathy Helm, Natalie Herzog, Cynthia Norris, Paul Pansini, and Stuart Ryba made key contributions to this report. | Foodborne diseases in the United States cause an estimated 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths annually, according to the Centers for Disease Control and Prevention (CDC). Surveillance is the most important tool for detecting and monitoring both existing and emerging foodborne diseases. In the United States, surveillance for foodborne disease is also used to identify outbreaks--two or more cases of a similar illness that result from ingestion of a common food--and their causes. CDC has 18 surveillance systems used to detect cases or outbreaks of foodborne disease, pinpoint their cause, recognize trends, and develop effective prevention and control measures. Four principal systems--the Foodborne Disease Outbreak Surveillance System, PulseNet, FoodNet, and the Surveillance Outbreak Detection Algorithm--focus on foodborne diseases and cover more than one pathogen. Although CDC's systems have contributed to food safety, the usefulness of several of these surveillance systems is impaired both by CDC's untimely release of surveillance data and by gaps in the data collection.
CDC is providing funds to state and local health departments to address their staffing and technology needs to help the states provide CDC with more complete information. CDC officials have entered into a cooperative agreement with the Association of Public Health Laboratories to assess the states' capability and capacity to address public health issues, including foodborne disease. CDC consults annually with the Council of State and Territorial Epidemiologists to encourage more standardized reporting among states. |
Since 2000, legacy airlines have faced unprecedented internal and external challenges. Internally, the impact of the Internet on how tickets are sold and how consumers search for fares, together with the growth of low cost airlines into a market force accessible to almost every consumer, has hurt legacy airline revenues by placing downward pressure on airfares. More recently, airlines’ costs have been hurt by rising fuel prices (see figure 1). This is especially true of airlines that did not have fuel hedging in place. Externally, a series of largely unforeseen events—among them the September 11th terrorist attacks in 2001 and associated security concerns; war in Iraq; the SARS crisis; economic recession beginning in 2001; and a steep decline in business travel—seriously disrupted the demand for air travel during 2001 and 2002. Low fares have constrained revenues for both legacy and low cost airlines. Yields, the amount of revenue airlines collect for every mile a passenger travels, fell for both low cost and legacy airlines from 2000 through 2004 (see figure 2). However, the decline has been greater for legacy airlines than for low cost airlines. During the first quarter of 2005, average yields among both legacy and low cost airlines rose somewhat, although those for legacy airlines still trailed what they were able to earn during the same period in 2004. Legacy airlines, as a group, have been unsuccessful in reducing their costs to become more competitive with low cost airlines. Unit cost competitiveness is key to profitability for airlines because of declining yields. While legacy airlines have been able to reduce their overall costs since 2001, these reductions were largely achieved through capacity cuts and without an improvement in their unit costs. Meanwhile, low cost airlines have been able to maintain low unit costs, primarily by continuing to grow. As a result, low cost airlines have been able to sustain a unit cost advantage as compared to their legacy rivals (see figure 3).
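Both measures are simple ratios: yield is revenue per revenue passenger mile, and unit cost is operating expense per available seat mile. The sketch below illustrates the arithmetic with hypothetical figures; the revenue, expense, and mileage totals are invented for illustration, not taken from the report's data.

```python
def yield_cents(passenger_revenue, revenue_passenger_miles):
    """Yield: cents of revenue collected per mile a paying passenger travels."""
    return 100 * passenger_revenue / revenue_passenger_miles

def unit_cost_cents(operating_expense, available_seat_miles):
    """Unit cost: cents of operating expense per available seat mile."""
    return 100 * operating_expense / available_seat_miles

# A hypothetical airline: $9.0 billion in passenger revenue over
# 72 billion revenue passenger miles, and $11.0 billion in operating
# expense over 100 billion available seat miles.
print(round(yield_cents(9.0e9, 72e9), 2))        # 12.5 cents per mile
print(round(unit_cost_cents(11.0e9, 100e9), 2))  # 11.0 cents per seat mile
```

On these ratios, a 2.7 cent difference in cost per available seat mile, as cited for 2004, is a large gap relative to typical yields.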
In 2004, low cost airlines maintained a 2.7 cent per available seat mile advantage over legacy airlines. This advantage is attributable to lower overall costs and greater labor and asset productivity. During the first quarter of 2005, both legacy and low cost airlines continued to struggle to reduce costs, in part because of the increase in fuel costs. Weak revenues and the inability to realize greater unit cost-savings have combined to produce unprecedented losses for legacy airlines. At the same time, low cost airlines have been able to continue producing modest profits as a result of lower unit costs (see figure 4). Legacy airlines have lost a cumulative $28 billion since 2001 and are predicted to lose another $5 billion in 2005, according to industry analysts. First quarter 2005 operating losses (based on data reported to DOT) approached $1.45 billion for legacy airlines. Low cost airlines also reported net operating losses of almost $0.2 billion, driven primarily by ATA’s losses. Since 2000, as the financial condition of legacy airlines deteriorated, they built cash balances not through operations but by borrowing. Legacy airlines have lost cash from operations and compensated for operating losses by taking on additional debt, relying on creditors for more of their capital needs than in the past. In the process of doing so, several legacy airlines have used all, or nearly all, of their assets as collateral, potentially limiting their future access to capital markets. In sum, airlines are capital and labor intensive firms subject to highly cyclical demand and intense competition. Aircraft are very expensive and require large amounts of debt financing to acquire, resulting in high fixed costs for the industry. Labor is largely unionized and highly specialized, making it expensive and hard to reduce during downturns. 
Competition in the industry is frequently intense owing to periods of excess capacity, relatively open entry, and the willingness of lenders to provide financing. Finally, demand for air travel is highly cyclical, closely tied to the business cycle. Over the past decade, these structural problems have been exacerbated by the growth in low cost airlines and increasing consumer sensitivity to differences in airfares based on their use of the Internet to purchase tickets. More recently airlines have had to deal with persistently high fuel prices—operating profitability, excluding fuel costs, is as high as it has ever been for the industry. Airlines seek bankruptcy protection for such reasons as severe liquidity pressures, an inability to obtain relief from employees and creditors, and an inability to obtain new financing, according to airline officials and bankruptcy experts. As a result of the structural problems and external shocks previously discussed, there have been 160 total airline bankruptcy filings since deregulation in 1978, including 20 since 2000, according to the Air Transport Association. Some airlines have failed more than once but most filings were by smaller carriers. However, the size of airlines that have been declaring bankruptcy has been increasing. Of the 20 bankruptcy filings since 2000, half of these have been for airlines with more than $100 million in assets, about the same number of filings as in the previous 22 years. Compared to the average failure rate for all types of businesses, airlines have failed more often than other businesses. As figure 5 shows, in some years, airline failures were several times more common than for businesses overall. With very few exceptions, airlines that enter bankruptcy do not emerge from it. Of the 146 airline Chapter 11 reorganization filings since 1979, in only 16 cases are the airlines still in business. 
Many of the advantages of bankruptcy stem from legal protection afforded the debtor airline from its creditors, but this protection comes at a high cost in loss of control over airline operations and damaged relations with employees, investors, and suppliers, according to airline officials and bankruptcy experts. Contrary to some assertions that bankruptcy protection has led to overcapacity and under pricing that have harmed healthy airlines, we found no evidence that this has occurred either in individual markets or to the industry overall. Such claims have been made for more than a decade. In 1993, for example, a national commission to study airline industry problems cited bankruptcy protection as a cause for the industry’s overcapacity and weakened revenues. More recently, airline executives have cited bankruptcy protection as a reason for industry over capacity and low fares. However, we found no evidence that this had occurred and some evidence to the contrary. First, as illustrated by Figure 6, airline liquidations do not appear to affect the continued growth in total industry capacity. If bankruptcy protection leads to overcapacity as some contend, then liquidation should take capacity out of the market. However, the historical growth of airline industry capacity (as measured by available seat miles, or ASMs) has continued unaffected by major liquidations. Only recessions, which curtail demand for air travel, and the September 11th attack, appear to have caused the airline industry to trim capacity. This trend indicates that other airlines quickly replenish capacity to meet demand. In part, this can be attributed to the fungibility of aircraft and the availability of capital to finance airlines. Similarly, our research does not indicate that the departure or liquidation of a carrier from an individual market necessarily leads to a permanent decline in traffic for that market. 
We contracted with Intervistas/GA2, an aviation consultant, to examine the cases of six hub cities that experienced the departure or significant withdrawal of service of an airline over the last decade (see table 1). In four of the cases, both local origin-and-destination (i.e., passenger traffic to or from, but not connecting through, the local hub) and total passenger traffic (i.e., local and connecting) increased or changed little because the other airlines expanded their traffic in response. In all but one case, fares either decreased or rose less than 6 percent. We also reviewed numerous other bankruptcy and airline industry studies and spoke to industry analysts to determine what evidence existed with regard to the impact of bankruptcy on the industry. We found two major academic studies that provided empirical data on this issue. Both studies found that airlines under bankruptcy protection did not lower their fares or hurt competitor airlines, as some have contended. A 1995 study found that an airline typically reduced its fares somewhat before entering bankruptcy. However, the study found that other airlines did not lower their fares in response and, more importantly, did not lose passenger traffic to their bankrupt rival and therefore were not harmed by the bankrupt airline. Another study came to a similar conclusion in 2000, this time examining the operating performance of 51 bankrupt firms, including 5 airlines, and their competitors. Rather than examine fares as did the 1995 study, this study examined the operating performance of bankrupt firms and their rivals. This study found that bankrupt firms’ performance deteriorated prior to filing for bankruptcy and that their rivals’ profits also declined during this period. However, once a firm entered bankruptcy, its rivals’ profits recovered. Under current law, legacy airlines’ pension funding requirements are estimated to be a minimum of $10.4 billion from 2005 through 2008. 
These estimates assume the expiration of the Pension Funding Equity Act (PFEA) at the end of this year. The PFEA permitted airlines to defer the majority of their deficit reduction contributions in 2004 and 2005; if this legislation is allowed to expire, payments due from legacy airlines will significantly increase in 2006. According to PBGC data, legacy airlines are estimated to owe a minimum of $1.5 billion this year, rising to nearly $2.9 billion in 2006, $3.5 billion in 2007, and $2.6 billion in 2008. In contrast, low cost airlines have eschewed defined benefit pension plans and instead use defined contribution (401(k)-type) plans. However, pension funding obligations are only part of the sizeable amount of debt that carriers face over the near term. The size of legacy airlines’ future fixed obligations, including pensions, relative to their financial position suggests they will have trouble meeting their various financial obligations. Fixed airline obligations (including pensions, long term debt, and capital and operating leases) in each year from 2005 through 2008 are substantial. Legacy airlines carried cash balances of just under $10 billion going into 2005 (see figure 7) and have used cash to fund their operational losses. These airlines’ fixed obligations are estimated to be over $15 billion in both 2005 and 2006, over $17 billion in 2007, and about $13 billion in 2008. While cash from operations can help fund some of these obligations, continued losses and the size of these obligations put these airlines in a sizable liquidity bind. Fixed obligations in 2008 and beyond will likely increase as payments due in 2006 and 2007 may be pushed out and new obligations are assumed. The enormity of legacy airlines’ future pension funding requirements is attributable to the size of the pension shortfall that has developed since 2000.
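As a rough cross-check, the yearly PBGC estimates cited above can be summed. The figures below are the rounded amounts from the testimony ("nearly $2.9 billion" is taken as 2.9), so the total only approximates the stated $10.4 billion minimum:

```python
# Estimated minimum pension contributions owed by legacy airlines,
# in billions of dollars, per the PBGC data cited in the testimony.
# Values are rounded, so the sum lands slightly above the stated
# $10.4 billion minimum for 2005 through 2008.
contributions = {2005: 1.5, 2006: 2.9, 2007: 3.5, 2008: 2.6}

total = sum(contributions.values())
print(f"2005-2008 total: about ${total:.1f} billion")
```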
As recently as 1999, airline pensions were overfunded by $700 million based on Securities and Exchange Commission (SEC) filings; by the end of 2004, legacy airlines reported a deficit of $21 billion (see figure 8), despite the termination of the US Airways pilots’ plan in 2003. Since these filings, the total underfunding has declined to approximately $13.7 billion, due in part to the termination of the United Airlines plans and the remaining US Airways plans. The extent of underfunding varies significantly by airline. At the end of 2004, prior to terminating its pension plans, United reported underfunding of $6.4 billion, which represented over 40 percent of United’s total operating revenues in 2004. In contrast, Alaska reported pension underfunding of $303 million at the end of 2004, or 13.5 percent of its operating revenues. Since United terminated its pensions, Delta and Northwest now appear to have the most significant pension funding deficits—over $5 billion and nearly $4 billion, respectively—which represent about 35 percent of 2004 operating revenues at each airline. The growth of pension underfunding is attributable to three factors. Asset losses and low interest rates. Airline pension asset values dropped nearly 20 percent from 2001 through 2004 along with the decline in the stock market, while future obligations have steadily increased due to declines in the interest rates used to calculate the liabilities of plans. Management and labor union decisions. Pension plans have been funded far less than they could have been on a tax deductible basis. PBGC examined 101 cases of airline pension contributions from 1997 through 2002 and found that while the maximum deductible contribution was made in 10 cases, no cash contributions were made in 49 cases where they could have been. When airlines did make tax deductible contributions, they were often far less than the maximum permitted.
For example, the airlines examined could have contributed a total of $4.2 billion on a tax deductible basis in 2000 alone, but only contributed about $136 million despite recording profits of $4.1 billion (see figure 9). In addition, management and labor have sometimes agreed to salary and benefit increases beyond what could reasonably be afforded. For example, in the spring of 2002, United’s management and mechanics reached a new labor agreement that increased the mechanics’ pension benefit by 45 percent, but the airline declared bankruptcy the following December. Pension funding rules are flawed. Existing laws and regulations governing pension funding and premiums have also contributed to the underfunding of defined benefit pension plans. As a result, financially weak plan sponsors, acting within the law, have not only been able to avoid contributions to their plans, but also increase plan liabilities that are at least partially insured by PBGC. Under current law, reported measures of plan funding have likely overstated the funding levels of pension plans, thereby reducing minimum contribution thresholds for plan sponsors. And when plan sponsors were required to make additional contributions, they often substituted “account credits” for cash contributions, even as the market value of plan assets may have been in decline. Furthermore, the funding rule mechanisms that were designed to improve the condition of poorly funded plans were ineffective. Other lawful plan provisions and amendments, such as lump sum distributions and unfunded benefit increases may also have contributed to deterioration in the funding of certain plans. Finally, the premium structure in PBGC’s single-employer pension insurance program does not encourage better plan funding. The cost to PBGC and participants of defined benefit pension terminations has grown in recent years as the level of pension underfunding has deepened. 
When Eastern Airlines defaulted on its pension obligations of nearly $1.7 billion in 1991, for example, claims against the insurance program totaled $530 million in underfunded pensions and participants lost $112 million. By comparison, the US Airways and United pension terminations cost PBGC $9.6 billion in combined claims against the insurance program and reduced participants’ benefits by $5.2 billion (see table 2). In recent pension terminations, because of statutory limits, active and high salaried employees generally lost more of their promised benefits compared to retirees and low salaried employees. For example, PBGC generally does not guarantee benefits above a certain amount, currently $45,614 annually per participant at age 65. For participants who retire before 65, the guaranteed benefits are even less; participants who retire at age 60 are currently limited to $29,649. Commercial pilots often end up with substantial benefit cuts when their plans are terminated because they generally have high benefit amounts and are also required by FAA to retire at age 60. Far fewer nonpilot retirees are affected by the maximum payout limits. For example, at US Airways fewer than 5 percent of retired mechanics and attendants faced benefit cuts as a result of the pension termination. Tables 3 and 4 summarize the expected cuts in benefits for different groups of United’s active and retired employees. It is important to emphasize that relieving legacy airlines of their defined benefit funding costs will help alleviate immediate liquidity pressures, but it does not fix their underlying cost structure problems, which are much greater. Pension costs, while substantial, are only a small portion of legacy airlines’ overall costs. As noted previously in figure 3, the cost of legacy airlines’ defined benefit plans accounted for a 0.4 cent, or 15 percent, difference between legacy and low cost airline unit costs.
The remaining 85 percent of the unit cost differential between legacy and low cost carriers is attributable to factors other than defined benefit pension plans. Moreover, even if legacy airlines terminated their defined benefit plans, doing so would not fully eliminate this portion of the unit cost differential because, according to labor officials we interviewed, other plans would replace them. While the airline industry was deregulated 27 years ago, the full effect on the industry’s structure is only now becoming evident. Dramatic changes in the level and nature of demand for air travel, combined with an equally dramatic evolution in how airlines meet that demand, have forced a drastic restructuring of the competitive structure of the industry. Excess capacity in the airline industry since 2000 has greatly diminished airlines’ pricing power. Profitability, therefore, depends on which airlines can most effectively compete on cost. This development has allowed inroads for low cost airlines and forced wrenching change upon legacy airlines that had long competed based on a high-cost business model. The historically high number of airline bankruptcies and liquidations is a reflection of the industry’s inherent instability; it should not be confused with a cause of that instability. There is no clear evidence that bankruptcy has contributed to the industry’s economic ills, including overcapacity and underpricing, and there is some evidence to the contrary. Equally telling is how few airlines that have filed for bankruptcy protection are still doing business. Clearly, bankruptcy has not afforded these companies a special advantage. Bankruptcy has become a means by which some legacy airlines are seeking to shed their costs and become more competitive.
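Working backward from the figures just cited (0.4 cent per available seat mile equals 15 percent of the gap), the implied size of the overall unit cost differential can be sketched. This is back-of-the-envelope arithmetic on the testimony's numbers, not data from the underlying report:

```python
# Defined benefit pension costs account for 0.4 cent of the unit cost
# gap between legacy and low cost airlines, or 15 percent of that gap.
pension_gap = 0.4    # cents per available seat mile (ASM)
pension_share = 0.15

total_gap = pension_gap / pension_share   # implied total gap, cents/ASM
remaining_gap = total_gap - pension_gap   # gap left even without pensions

print(f"Implied total unit cost gap: {total_gap:.2f} cents/ASM")
print(f"Gap unrelated to pensions:   {remaining_gap:.2f} cents/ASM")
```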
However, the termination of pension obligations by United Airlines and US Airways has had substantial and widespread effects on the PBGC and thousands of airline employees, retirees, and other beneficiaries. Liquidity problems, including $10.4 billion in near term pension contributions, may force additional legacy airlines to follow suit. Some airlines are seeking legislation to allow more time to fund their pensions. If their plans are frozen so that future liabilities do not continue to grow, allowing an extended payback period may reduce the likelihood that these airlines will file for bankruptcy and terminate their pensions in the coming year. However, unless these airlines can reform their overall cost structures and become more competitive with their low cost rivals, this will be only a temporary reprieve. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact JayEtta Hecker at (202) 512-2834 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Paul Aussendorf, Anne Dilger, Steve Martin, Richard Swayze, and Pamela Vines. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since 2001, the U.S. airline industry has confronted unprecedented financial losses. Two of the nation's largest airlines--United Airlines and US Airways--went into bankruptcy, terminating their pension plans and passing the unfunded liability to the Pension Benefit Guaranty Corporation (PBGC).
PBGC's unfunded liability was $9.6 billion; plan participants lost $5.2 billion in benefits. Considerable debate has ensued over airlines' use of bankruptcy protection as a means to continue operations, often for years. Many in the industry and elsewhere have maintained that airlines' use of this approach is harmful to the industry, in that it allows inefficient carriers to reduce ticket prices below those of their competitors. This debate has come into even sharper focus with the pension defaults. Critics argue that by not having to meet their pension obligations, airlines in bankruptcy have an advantage that may encourage other companies to take the same approach. GAO is completing a report for the Committee due later this year. Today's testimony presents preliminary observations in three areas: (1) the continued financial difficulties faced by legacy airlines, (2) the effect of bankruptcy on the industry and competitors, and (3) the effect of airline pension underfunding on employees, airlines, and the PBGC. U.S. legacy airlines have not been able to reduce their costs sufficiently to profitably compete with low cost airlines that continue to capture market share. Internal and external challenges have fundamentally changed the nature of the industry and forced legacy airlines to restructure themselves financially. The changing demand for air travel and the growth of low cost airlines have kept fares low, forcing these airlines to reduce their costs. They have struggled to do so, however, especially as the cost of jet fuel has jumped. So far, they have been unable to reduce costs to the level of their low cost rivals. As a result, legacy airlines have continued to lose money--$28 billion since 2001. Although some industry observers have asserted that airlines undergoing bankruptcy reorganization contribute to the industry's financial problems, GAO found no clear evidence that historically airlines in bankruptcy have financially harmed competing airlines.
Bankruptcy is endemic to the industry; there have been 160 airline bankruptcy filings since deregulation in 1978, including 20 since 2000. Most airlines that entered bankruptcy have not survived. Moreover, despite assertions to the contrary, available evidence does not suggest that airlines in bankruptcy contribute to industry overcapacity or that bankrupt airlines harm competitors by reducing fares below what other airlines are charging. While bankruptcy may not be detrimental to rival airlines, it is detrimental to pension plan participants and the PBGC. The remaining legacy airlines with defined benefit pension plans face over $60 billion in fixed obligations over the next 4 years, including $10.4 billion in pension obligations--more than some of these airlines may be able to afford given continued losses. While cash from operations can help fund some of these obligations, continued losses and the size of these obligations put these airlines in a sizable liquidity bind. Moreover, legacy airlines still face considerable restructuring before they become competitive with low cost airlines.
The Justice Assistance Act of 1984 (P.L. 98-473) created OJP to provide federal leadership in developing the nation’s capacity to prevent and control crime, administer justice, and assist crime victims. OJP carries out its responsibilities by providing grants to various organizations, including state and local governments, Indian tribal governments, nonprofit organizations, universities, and private foundations. OJP comprises five bureaus, including BJA, and seven program offices, including VAWO. In fulfilling its mission, BJA provides grants for programs and for training and technical assistance to combat violent and drug-related crime and help improve the criminal justice system. VAWO administers grants to help prevent and stop violence against women, including domestic violence, sexual assault, and stalking. During fiscal years 1995 through 2001, BJA and VAWO awarded about $943 million to fund 700 Byrne and 1,264 VAWO discretionary grants. One of BJA’s major grant programs is the Byrne Program. BJA administers the Byrne program, just as its counterpart, VAWO, administers its programs. Under the Byrne discretionary grants program, BJA provides federal financial assistance to grantees for educational and training programs for criminal justice personnel; for technical assistance to state and local units of government; and for projects that are replicable in more than one jurisdiction nationwide. During fiscal years 1995 through 2001, Byrne discretionary grant programs received appropriations of about $385 million. VAWO was created in 1995 to carry out certain programs created under the Violence Against Women Act of 1994. The Victims of Trafficking and Violence Prevention Act of 2000 reauthorized most of the existing VAWO programs and added new programs. 
VAWO programs seek to improve criminal justice system responses to domestic violence, sexual assault, and stalking by providing support for law enforcement, prosecution, courts, and victim advocacy programs across the country. During fiscal years 1995 through 2001, VAWO’s five discretionary grant programs that were subject to program evaluation were (1) STOP (Services, Training, Officers, and Prosecutors) Violence Against Indian Women Discretionary Grants, (2) Grants to Encourage Arrest Policies, (3) Rural Domestic Violence and Child Victimization Enforcement Grants, (4) Domestic Violence Victims’ Civil Legal Assistance Grants, and (5) Grants to Combat Violent Crimes Against Women on Campuses. During fiscal years 1995 through 2001, about $505 million was appropriated to these discretionary grant programs. As already mentioned, NIJ is the principal research and development agency within OJP, and its duties include developing, conducting, directing, and supervising Byrne and VAWO discretionary grant program evaluations. Under 42 U.S.C. 3766, NIJ is required to “conduct a reasonable number of comprehensive evaluations” of the Byrne discretionary grant program. In selecting programs for review under section 3766, NIJ is to consider new and innovative approaches, program costs, potential for replication in other areas, and the extent of public awareness and community involvement. According to NIJ officials, the implementation of various types of evaluations, including process and impact evaluations, fulfills this legislative requirement. Although legislation creating VAWO does not require evaluations of the VAWO discretionary grant programs, Justice’s annual appropriations for VAWO during fiscal years 1998 through 2002 included monies for NIJ research and evaluations of violence against women. In addition, Justice has promulgated regulations requiring that NIJ conduct national evaluations of two of VAWO’s discretionary grant programs. 
As with the Byrne discretionary programs, NIJ is not required by statute or Justice regulation to conduct specific types of program evaluations, such as impact or process evaluations. The Director of NIJ is responsible for making the final decision on which Byrne and VAWO discretionary grant programs to evaluate; this decision is based on the work of NIJ staff in coordination with Byrne or VAWO program officials. Once the decision has been made to evaluate a particular program, NIJ issues a solicitation for proposals for grant funding from potential evaluators. When applications or proposals are received, an external peer review panel comprising members of the research and relevant practitioner communities is convened. Peer review panels identify the strengths, weaknesses, and potential methodologies to be derived from competing proposals. When developing their consensus reviews, peer review panels are to consider the quality and technical merit of the proposal; the likelihood that grant objectives will be met; the capabilities, demonstrated productivity, and experience of the evaluators; and budget constraints. Each written consensus review is reviewed and discussed with partnership agency representatives (e.g., staff from BJA or VAWO). These internal staff reviews and discussions are led by NIJ’s Director of the Office of Research and Evaluation who then presents the peer review consensus reviews, along with agency and partner agency input, to the NIJ Director for consideration and final grant award decisions. The NIJ Director makes the final decision regarding which application to fund. To meet our objectives, we conducted our work at OJP, BJA, VAWO, and NIJ headquarters in Washington, D.C. We reviewed applicable laws and regulations, guidelines, reports, and testimony associated with Byrne and VAWO discretionary grant programs and evaluation activities. 
In addition, we interviewed responsible OJP, NIJ, BJA, and VAWO officials regarding program evaluations of discretionary grants. As agreed with your offices, we focused on program evaluation activities associated with the Byrne and VAWO discretionary grant programs. In particular, we focused on the program evaluations of discretionary grants that were funded during fiscal years 1995 through 2001. To address our first objective, regarding the number, type, status of completion, and award amount of Byrne and VAWO discretionary grant program evaluations, we interviewed NIJ, BJA, and VAWO officials and obtained information on Byrne and VAWO discretionary grant programs and program evaluations. Because NIJ is responsible for carrying out program evaluations of Byrne and VAWO discretionary grant programs, we also obtained and analyzed NIJ data about specific Byrne and VAWO discretionary grant program evaluations, including information on the number of evaluations as well as the type, cost, source of funding, and stages of implementation of each evaluation for fiscal years 1995 through 2001. We did not independently verify the accuracy or completeness of the data that NIJ provided. To address the second objective, regarding the methodological rigor of the impact evaluation studies of Byrne and VAWO discretionary grant programs during fiscal years 1995 through 2001, we initially identified the impact evaluations from the universe of program evaluations specified by NIJ. We excluded from our analysis any impact evaluations that were in the formative stage of development—that is, the application had been awarded but the methodological design was not yet fully developed. As a result, we reviewed four program evaluations. 
For the four impact evaluations that we reviewed, we asked NIJ to provide any documentation relevant to the design and implementation of the impact evaluation methodologies, such as the application solicitation, the grantee’s initial and supplemental applications, progress notes, interim reports, requested methodological changes, and any final reports that may have become available during the data collection period. We also provided NIJ with a list of methodological issues to be considered in our review and requested them to submit any additional documentation that addressed these issues. We used a data collection instrument to obtain information systematically about each program being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined such factors as whether evaluation data were collected before and after program implementation; how program effects were isolated (i.e., the use of nonprogram participant comparison groups or statistical controls); and the appropriateness of sampling and outcome measures. Two of our senior social scientists with training and experience in evaluation research and methodology separately reviewed the evaluation documents and developed their own assessments before meeting jointly to discuss the findings and implications. This was done to promote a grant evaluation review process that was both independent and objective. To obtain information on the approaches that BJA, VAWO, and NIJ used to disseminate program evaluation results, we requested and reviewed, if available, relevant handbooks and guidelines on information dissemination, including, for example, NIJ’s guidelines. We also reviewed BJA, VAWO, and NIJ’s available print and electronic products as related to their proven programs and evaluations, including two NIJ publications about Byrne discretionary programs and their evaluation methodologies and results. 
We conducted our work between February 2001 and December 2001 in accordance with generally accepted government auditing standards. We requested comments from Justice on a draft of this report in January 2002. The comments are discussed near the end of this letter and are reprinted as appendix III. During fiscal years 1995 through 2001, NIJ awarded about $6 million to carry out five Byrne and five VAWO discretionary grant program evaluations. NIJ awarded evaluation grants using mostly funds transferred from BJA and VAWO. Specifically, of the approximately $1.9 million awarded for one impact and four process evaluations of the Byrne discretionary program, NIJ contributed about $299,000 (16 percent) and BJA contributed about $1.6 million (84 percent). VAWO provided all of the funding (about $4 million) to NIJ for all program evaluations of five VAWO discretionary grant programs. According to NIJ, the five VAWO program evaluations included both impact and process evaluations. Our review of information provided by NIJ showed that 6 of the 10 program evaluations—all 5 VAWO evaluations and 1 Byrne evaluation—included impact evaluations. The remaining four Byrne evaluations were exclusively process evaluations that measured the extent to which the programs were working as intended. As of December 2001, only one of these evaluations, the impact evaluation of the Byrne Children at Risk (CAR) Program, had been completed. The remaining evaluations were in various stages of implementation. Table 1 lists each of the five Byrne program evaluations and shows whether it was a process or an impact evaluation, its stage of implementation, the amount awarded during fiscal years 1995 through 2001, and the total amount awarded since the evaluation was funded. Table 2 lists each of the five VAWO program evaluations and shows that it was both a process and an impact evaluation, its stage of implementation, and the amount awarded during fiscal years 1995 through 2001, which is the total amount awarded.
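The funding shares reported above follow directly from the dollar amounts; a quick tally, with amounts rounded as in the text:

```python
# Funding for the five Byrne discretionary program evaluations,
# in dollars, rounded as reported: NIJ about $299,000 and BJA
# about $1.6 million of the roughly $1.9 million total.
nij_funds = 299_000
bja_funds = 1_600_000

total_funds = nij_funds + bja_funds
print(f"NIJ share: {nij_funds / total_funds:.0%}")  # about 16 percent
print(f"BJA share: {bja_funds / total_funds:.0%}")  # about 84 percent
```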
Our review showed that methodological problems have adversely affected three of the four impact evaluations that have progressed beyond the formative stage. All three VAWO evaluations that we reviewed demonstrated a variety of methodological limitations, raising concerns as to whether the evaluations will produce definitive results. The one Byrne evaluation was well designed and used appropriate data collection and analytic methods. We recognize that impact evaluations, such as the type that NIJ is managing, can encounter difficult design and implementation issues. In the three VAWO evaluations that we reviewed, program variation across sites has added to the complexity of designing the evaluations. Sites could not be shown to be representative of the programs or of particular elements of these programs, thereby limiting the ability to generalize results; the lack of comparison groups hinders the ability to minimize the effects of factors external to the program. Furthermore, data collection and analytical problems compromise the ability of evaluators to draw appropriate conclusions from the results. In addition, peer review committees found methodological problems in two of the three VAWO evaluations that we considered. The four program evaluations are multiyear, multisite impact evaluations. Some program evaluations used a sample of grants, while others used the entire universe of grants. For example, the Grants to Encourage Arrest Policies Program evaluation used 6 of the original 130 grantee sites. In contrast, in the Byrne Children at Risk impact evaluation, all five sites participated. As of December 2001, NIJ had already received the impact findings from the Byrne Children at Risk Program evaluation but had not received impact findings from the VAWO discretionary grant program evaluations.
An impact evaluation is an inherently difficult task, since the objective is to isolate the effects of a particular program or factor from all other potential contributing programs or factors that could also effect change. Given that the Byrne and VAWO programs are operating in an ever changing, complex environment, measuring the impact of these specific Byrne and VAWO programs can be arduous. For example, in the evaluation of VAWO’s Rural Domestic Violence Program, the evaluator’s responsibility is to demonstrate how the program affected the lives of domestic violence victims and the criminal justice system. Several other programs or factors besides the Rural Domestic Violence Program may be accounting for all or part of the observed changes in victims’ lives and the criminal justice system (e.g., a co-occurring program with similar objectives, new legislation, a local economic downturn, an alcohol abuse treatment program). Distinguishing the effects of the Rural Domestic Violence Program requires use of a rigorous methodological design. All three VAWO programs permitted their grantees broad flexibility in the development of their projects to match the needs of their local communities. According to the Assistant Attorney General, this variation in projects is consistent with the intent of the programs’ authorizing legislation. We recognize that the authorizing legislation provides VAWO the flexibility in designing these programs. Although this flexibility may make sense from a program perspective, the resulting project variation makes it more difficult to design and implement a definitive impact evaluation of the program. Instead of assessing a single, homogeneous program with multiple grantees, the evaluation must assess multiple configurations of a program, thereby making it difficult to generalize about the entire program. 
Although all of the grantees’ projects under each program being evaluated are intended to achieve the same or similar goals, an aggregate analysis could mask the differences in effectiveness among individual projects and thus not yield information about which configurations of projects work and which do not. The three VAWO programs exemplify this situation. The Arrest Policies Program provided grantees with the flexibility to develop their respective projects within six purpose areas: implementing mandatory arrest or proarrest programs and policies in police departments, tracking domestic violence cases, centralizing and coordinating police domestic violence operations, coordinating computer tracking systems, strengthening legal advocacy services, and educating judges and others about how to handle domestic violence cases. Likewise, the STOP Grants Program encouraged tribal governments to develop and implement culture-specific strategies for responding to violent crimes against Indian women and to provide appropriate services for those who are victims of domestic abuse, sexual assault, and stalking. Finally, the Rural Domestic Violence Program was designed to give sites the flexibility to develop projects, based on need, with respect to early identification of, intervention in, and prevention of woman battering and child victimization; increases in victim safety and access to services; enhancement of the investigation and prosecution of crimes of domestic violence; and development of innovative, comprehensive strategies for fostering community awareness and prevention of domestic abuse. Because participating grant sites emphasized different project configurations, the resulting evaluation may not provide information that could be generalized to a broader implementation of the program. The sites participating in the three VAWO evaluations were not shown to be representative of their programs.
Various techniques are available to help evaluators choose representative sites and representative participants within those sites. Random sampling of sites and of participants within sites is ideal, but when this is not feasible, other purposeful sampling methods can be used to help approximate an appropriate sample (e.g., stratification, in which the sample is chosen in proportions that reflect the larger population). At a minimum, purposeful selection can ensure the inclusion of a range of relevant sites. As discussed earlier, in the case of the Arrest Policies Program, six purpose areas were identified in the grant solicitation. The six grantees chosen for participation in the evaluation were not, however, selected on the basis of their representativeness of the six purpose areas or the program as a whole. Rather, they were selected on the basis of factors related solely to program “stability”; that is, they were considered likely to receive local funding after the conclusion of federal grant funding, and key personnel would continue to participate in the coordinated program effort. Similarly, the 10 Rural Domestic Violence impact evaluation grantees were not selected for participation on the basis of program representativeness or the specific purpose areas discussed earlier. Rather, sites were selected by the grant evaluator on the basis of “feasibility”; specifically, whether the site would be among those participants equipped to conduct an evaluation. Likewise, the STOP Violence Against Indian Women Program evaluation used 3 of the original 14 project sites for a longitudinal study; these were not shown to be representative of the sites in the overall program. For another phase of the evaluation, the principal investigator indicated that grantee sites were selected to be geographically representative of American Indian communities.
While this methodology provides for inclusion of a diversity of Indian tribes in the sample from across the country, geography as a sole criterion does not guarantee representativeness in relation to many other factors. Each of the three VAWO evaluations was designed without comparison groups—a factor that hinders the evaluator’s ability to isolate and minimize external factors that could influence the results of the study. Use of comparison groups is a standard practice employed by evaluators to help determine whether differences between baseline and follow-up results are due to the program under consideration or to some other programs or external factors. For example, as we reported in 1997, to determine whether a drug court program has been effective in reducing criminal recidivism and drug relapse, it is not sufficient merely to determine whether those participating in the drug court program show changes in recidivism and relapse rates. Changes in recidivism and relapse variables between baseline and program completion could be due to other external factors, irrespective of the drug court program (e.g., the state may have developed harsher sentencing procedures for those failing to meet drug court objectives). If, however, the drug court participant group is matched at baseline against another set of individuals (the “comparison group”) who are experiencing similar life circumstances but who do not qualify for drug court participation (e.g., because of area of residence), then the comparison group can help in isolating the effects of the drug court program. The contrasting of the two groups in relation to recidivism and relapse can provide an approximate measure of the program’s impact. All three VAWO program impact evaluations lacked comparison groups. One issue addressed in the Arrest Policies Program evaluation, for example, was the impact of the program on the safety and protection of the domestic violence victim.
The absence of a comparison group, however, makes it difficult to firmly conclude that change in the safety and protection of participating domestic abuse victims is due to the Arrest Policies Program and not to some other external factors operating in the environment (e.g., economic changes, nonfederal programs such as safe houses for domestically abused women, and church-run support programs). Instead of using comparison groups, the Arrest Policies Program evaluation sought to eliminate potential competing external factors by collecting and analyzing extensive historical and interview data from subjects and by conducting cross-site comparisons; the latter method proved unfeasible. The STOP Violence Against Indian Women Discretionary Grant Program has sought, in part, to reduce violent crimes against Indian women by changing professional staff attitudes and behaviors. To do this, some grantees created and developed domestic violence training services for professional staff participating in site activities. Without comparison groups, however, assessing the effect of the STOP training programs is difficult. Attitudes and behaviors may change for myriad reasons unrelated to professional training development initiatives. If a treatment group of professional staff receiving the STOP training had been matched with a comparison group of professional staff that was similar in all ways except receipt of training, there would be greater confidence that positive change could be attributed to the STOP Program. Similarly, the lack of comparison groups in the Rural Domestic Violence evaluation makes it difficult to conclude that a reduction in violence against women and children in rural areas can be attributed entirely, or in part, to the Rural Domestic Violence Program. Other external factors may be operating. All three VAWO impact evaluations involved data collection and analytical problems that may affect the validity of the findings and conclusions.
For example, we received documentation from NIJ on the STOP Grant Program for Reducing Violence Against Indian Women showing that only 43 percent of 127 grantees returned a mail survey. In addition, only 25 percent of 127 tribes provided victim outcome data on homicide and hospitalization rates—far less than the percentage needed to draw broad-based conclusions about the intended goal of assessing victim well-being. In the Arrest Policies evaluation, NIJ reported that the evaluators experienced difficulty in collecting pre-grant baseline data from multiple sites and the quality of the data was oftentimes inadequate, which hindered their ability to statistically analyze change over time. In addition, evaluators were hindered in several work areas by lack of automated data systems; data were missing, lost, or unavailable; and the ability to conduct detailed analyses of the outcome data was sometimes limited. For the Rural Domestic Violence evaluation, evaluators proposed using some variables (e.g., number and type of awareness messages disseminated to the community each month, identification of barriers to meeting the needs of women and children, and number of police officers who complete a training program on domestic violence) that are normally considered to relate more to a process evaluation than to an impact evaluation. NIJ noted that outcome measurement indicators varied by site, complicating the ability to draw generalizations. NIJ further indicated that the evaluation team did not collect baseline data prior to the start of the program, making it difficult to identify change resulting from the program. NIJ does not require applicants to use particular evaluation methodologies. NIJ employs peer review committees in deciding which evaluation proposals to fund.
The peer review committees expressed concerns about two of the three VAWO program evaluation proposals (i.e., those for the Arrest Policies and Rural Domestic Violence programs) that were subsequently funded by NIJ. Whereas NIJ funded the Arrest Policies evaluation as a grant, it funded the Rural Domestic Violence evaluation as a cooperative agreement so that it could be substantially involved in conducting the evaluation. A peer review panel and NIJ raised several concerns about the Arrest Policies Program evaluation proposal. These concerns included issues related to site selection, victim interviewee selection and retention in the sample, and the need for additional impact measures and control variables. The grant applicant’s responses to these issues did not remove concerns about the methodological rigor of the application, thus calling into question the ability of the grantee to assess the impact of the Arrest Policies Program. For example, the grantee stated that victim interviewee selection was to be conducted through a quota process and that the sampling would vary by site. This would not allow the evaluators to generalize program results. Also, the evaluators said that they would study communities at different levels of “coordination” when comparison groups were not feasible, but they did not adequately explain (1) how the various levels of coordination would be measured, (2) the procedures used to select the communities compared, and (3) the benefits of using this method as a replacement for comparison groups. NIJ subsequently funded this evaluation, and it is still in progress. A peer review committee for the Rural Domestic Violence and Child Victimization Enforcement Grant Program evaluation also expressed concerns about whether the design of the evaluation application, as proposed, would demonstrate whether the program was working.
In its consensus review notes, the peer review committee indicated that the “ability to make generalizations about what works and does not work will be limited.” The peer review committee also warned of outside factors (e.g., unavailability of data, inaccessibility of domestic violence victims) that could imperil the evaluation efforts of the applicant. Based on the peer review committee’s input, NIJ issued the following statement to the applicant: “As a national evaluation of a major programmatic effort we hope to have a research design and products on what is working, what is not working, and why. We are not sure that the proposed design will get us to that point.” We reviewed the grant applicant’s response to NIJ’s concern in its application addendum and found that the overall methodological design was still not discussed in sufficient detail or depth to determine whether the program was working. Although the Deputy Director of NIJ’s Office of Research and Evaluation asserted that this initial application was only for process evaluation funding, our review of available documents showed that the applicant had provided substantial information about both the process and impact evaluation methodologies in the application and addendum. We believe that the methodological rigor of the addendum was not substantially improved over that of the original application. The Deputy Director told us that, given the “daunting challenge faced by the evaluator,” NIJ decided to award the grant as a cooperative agreement. Under this arrangement, NIJ was to have substantial involvement in helping the grantee conduct the program evaluation. The results of that evaluation have not yet been submitted. The evaluator’s draft final report is expected no earlier than April 2002. 
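The comparison-group logic discussed in the preceding pages can be sketched in a few lines of code. This is a minimal, purely hypothetical illustration of a difference-in-differences calculation, the kind of analysis a comparison group makes possible; the figures and names below are invented for illustration and are not drawn from any of the evaluations reviewed.

```python
# Hypothetical difference-in-differences sketch. A change observed in
# program sites alone could reflect external factors (new legislation,
# economic shifts); subtracting the change seen in a matched comparison
# group nets out factors that affect both groups alike.

def difference_in_differences(treat_before, treat_after,
                              comp_before, comp_after):
    """Program impact = (treatment-group change) - (comparison-group change)."""
    treat_change = treat_after - treat_before
    comp_change = comp_after - comp_before
    return treat_change - comp_change

# Invented re-abuse rates (percent of victims re-abused within a year).
impact = difference_in_differences(
    treat_before=40.0, treat_after=25.0,  # program sites fell 15 points
    comp_before=42.0,  comp_after=35.0,   # comparison sites fell 7 points
)
print(impact)  # -8.0: the portion of the decline attributable to the
               # program rather than to shared external factors
```

Without the comparison group, the full 15-point decline might wrongly be credited to the program; the sketch shows why the evaluations' lack of comparison groups makes such attribution difficult.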
In contrast to the three VAWO impact evaluations, the Byrne impact evaluation employed methodological design and implementation procedures that met a high standard of methodological rigor, fulfilling each of the criteria indicated above. In part, this may reflect the fact that Byrne’s CAR demonstration program, unlike the VAWO programs, was, according to the Assistant Attorney General, intended to test a research hypothesis, and the evaluation was designed accordingly. CAR provided participants with the opportunity to use a limited number of program services (e.g., family services, education services, after-school activities) that were theoretically related to the impact variables and the prevention and reduction of drug use and delinquency. As a result, the evaluation was not complicated by project heterogeneity. All five grantees participated in the evaluation. High-risk youths within those projects were randomly selected from targeted neighborhood schools, providing student representation. Additionally, CAR evaluators chose a matched comparison group of youths with similar life circumstances (e.g., living in distressed neighborhoods and exposed to similar school and family risk factors) and without access to the CAR Program. Finally, no significant data collection implementation problems were associated with the CAR Program. The data were collected at multiple points in time from youths (at baseline, at completion of program, and at one year follow-up) and their caregivers (at baseline and at completion of program). Self-reported findings from youths were supplemented by the collection of more objective data from school, police, and court records on an annual basis, and rigorous test procedures were used to determine whether changes over time were statistically significant. Additionally, CAR’s impact evaluation used control groups, a methodologically rigorous technique not used in the three VAWO evaluations.
To further eliminate the effects of external factors, youths in the targeted neighborhood schools were randomly assigned either to the group receiving the CAR Program or to a control group that did not participate in the program. Since the CAR Program group made significant gains over both the same-school control group and the matched comparison group not participating in the program, there was good reason to conclude that the CAR Program was having a beneficial effect on the targeted audience. Appendix I provides summaries of the four evaluations. Despite great interest in assessing results of OJP’s discretionary grant programs, it can be extremely difficult to design and execute evaluations that will provide definitive information. Our in-depth review of one Byrne and three VAWO impact evaluations that have received funding since fiscal year 1995 has shown that, in some cases, the flexibility that can be beneficial to grantees in tailoring programs to meet their communities’ needs has added to the complexities of designing impact evaluations that will result in valid findings. Furthermore, the lack of site representativeness, appropriate comparison groups, and problems in data collection and analysis may compromise the reliability and validity of some of these evaluations. We recognize that not all evaluation issues that can compromise results are easily resolvable, including issues involving comparison groups and data collection. To the extent that methodological design and implementation issues can be overcome, however, the validity of the evaluation results will be enhanced. NIJ spends millions of dollars annually to evaluate OJP grant programs. More up-front attention to the methodological rigor of these evaluations will increase the likelihood that they will produce meaningful results for policymakers. Unfortunately, the problematic evaluation grants that we reviewed are too far along to be radically changed.
However, two of the VAWO evaluation grants are still in the formative stage; more NIJ attention to their methodologies now can better ensure usable results. We recommend that the Attorney General instruct the Director of NIJ to assess the two VAWO impact evaluations that are in the formative stage to address any potential methodological design and implementation problems and, on the basis of that assessment, initiate any needed interventions to help ensure that the evaluations produce definitive results. We further recommend that the Attorney General instruct the Director of NIJ to assess its evaluation process with the purpose of developing approaches to ensure that future impact evaluation studies are effectively designed and implemented so as to produce definitive results. We provided a copy of a draft of this report to the Attorney General for review and comment. In a February 13, 2002, letter, the Assistant Attorney General commented on the draft. Her comments are summarized below and presented in their entirety in appendix III. The Assistant Attorney General agreed with the substance of our recommendations and said that NIJ has begun, or plans to take steps, to address them. Although it is still too early to tell whether NIJ’s actions will be effective in preventing or resolving the problems we identified, they appear to be steps in the right direction. With regard to our first recommendation—that NIJ assess the two VAWO impact evaluations in the formative stage to address any potential design and implementation problems and initiate any needed intervention to help ensure definitive results—the Assistant Attorney General noted that NIJ has begun work to ensure that these projects will provide the most useful information possible.
She said that for the Crimes Against Women on Campus Program evaluation, NIJ is considering whether it will be possible to conduct an impact evaluation and, if so, how it can enhance its methodological rigor with the resources available. For the Civil Legal Assistance Program evaluation, the Assistant Attorney General said that NIJ is working with the grantee to review site selection procedures for the second phase of the study to enhance the representativeness of sites. The Assistant Attorney General was silent about any additional steps that NIJ would take during the later stages of the Civil Legal Assistance Program process evaluation to ensure the methodological rigor of the impact phase of the study. However, it seems likely that as the process evaluation phase of the study continues, NIJ may be able to take advantage of additional opportunities to address any potential design and implementation problems. With regard to our second recommendation—that NIJ assess its evaluation process to develop approaches to ensure that future evaluation studies are effectively designed and implemented to produce definitive results—the Assistant Attorney General stated that OJP has made program evaluation, including impact evaluations of federally funded programs, a high priority. The Assistant Attorney General said that NIJ has already launched an examination of NIJ’s evaluation process. She also noted that, as part of its reorganization, OJP plans to measurably strengthen NIJ’s capacity to manage impact evaluations with the goal of making them more useful for Congress and others. She noted as an example that OJP and NIJ are building measurement requirements into grants at the outset, requiring potential grantees to collect baseline data and track the follow-up data through the life of the grant. We have not examined OJP’s plans for reorganizing, nor do we have a basis for determining whether OJP’s plans regarding NIJ would strengthen NIJ’s capacity to manage evaluations. 
However, we believe that NIJ and its key stakeholders, such as Congress and the research community, would be well served if NIJ were to assess what additional actions it could take to strengthen its management of impact evaluations regardless of any reorganization plans. In her letter, the Assistant Attorney General pointed out that the report accurately describes many of the challenges facing evaluators when conducting research in the complex environment of criminal justice programs and interventions. However, she stated that the report could have gone further in acknowledging these challenges. The Assistant Attorney General also stated that the report contrasts the Byrne evaluation with the three VAWO evaluations and obscures important programmatic differences that affect an evaluator’s ability to achieve “GAO’s conditions for methodological rigor.” She pointed out that the Byrne CAR Program was intended to test a research hypothesis and that the evaluation was designed accordingly, i.e., the availability of baseline data was ensured, randomization was stipulated as a precondition of participation, and outcome measures were determined in advance on the basis of the theories to be tested. She further stated that, in contrast, all of the VAWO programs were (1) highly flexible funding streams, in keeping with the intention of Congress, that resulted in substantial heterogeneity at the local level and (2) well into implementation before the evaluation started. The Assistant Attorney General went on to say that it is OJP’s belief that evaluations under less than optimal conditions can provide valuable information about the likely impact of a program, even though the conditions for methodological strategies and overall rigor of the CAR evaluation were not available. We recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices.
And, as stated numerous times in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task but also that measuring the impact of these specific Byrne and VAWO programs can be arduous, given that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that, with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ evaluations will provide meaningful results for policymakers. Absent this up-front attention, questions arise as to whether NIJ is (1) positioned to provide the definitive results expected from an impact evaluation and (2) making sound investments given the millions of dollars spent on these evaluations. The Assistant Attorney General also commented that although our report discussed “generally accepted social science standards,” it did not specify the document that articulates these standards or describe our elements of rigor. As a result, the Assistant Attorney General said, OJP had to infer that six elements had to be met to achieve what “GAO believes” is necessary to “have a rigorous impact evaluation.” Specifically, she said that she would infer that, for an impact evaluation to be rigorous would require (1) selection of homogeneous programs, (2) random or stratified site sampling procedures (or selection of all sites), (3) use of comparison groups, (4) high response rates, (5) available and relevant automated data systems that will furnish complete and accurate data to evaluators in a timely manner, and (6) funding sufficient to accomplish all of the above. Furthermore, the Assistant Attorney General said that it is rare to encounter all of these conditions or be in a position to engineer all of these conditions simultaneously; and when all of these conditions are present, the evaluation would be rigorous.
She also stated that it is possible to glean useful, if not conclusive, evidence of the impact of a program from an evaluation that does not rise to the standard recommended by GAO because of the unavoidable absence of “one or more elements.” We agree that our report did not specify particular documents that articulate generally accepted social science standards. However, the standards that we applied are well defined in scientific literature. All assessments of the impact evaluations we reviewed were completed by social scientists with extensive experience in evaluation research. Throughout our report, we explain our rationale and the criteria we used in measuring the methodological rigor of NIJ’s impact evaluations. Furthermore, our report does not suggest that a particular standard or set of standards is necessary to achieve rigor, nor does it suggest that other types of evaluations, such as comprehensive process evaluations, are any less useful in providing information on how a program is operating. In this context, it is important to point out that the scope of our work covered impact evaluations of Byrne and VAWO discretionary grant programs—those designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. We differ with the Assistant Attorney General with respect to the six elements cited as necessary elements for conducting an impact evaluation. Contrary to the Assistant Attorney General’s assertion, our report did not state that a single homogeneous program is a necessary element for conducting a rigorous impact evaluation. Rather, we pointed out that heterogeneity or program variation is a challenge that adds to the complexity of designing an evaluation.
In addition, contrary to her assertion, the report did not state that random sampling or stratification was a necessary element for conducting a rigorous evaluation; instead, it stated that when random sampling is not feasible, other purposeful sampling methods can be used. With regard to comparison groups, the Assistant Attorney General’s letter asserted that GAO standards required using groups that do not receive program benefits as a basis of comparison with those that do receive such benefits. In fact, we believe that the validity of evaluation results can be enhanced through establishing and tracking comparison groups. If other ways exist to effectively isolate the impacts of a program, comparison groups may not be needed. However, we saw no evidence that other methods were effectively used in the VAWO impact evaluations we assessed. The Assistant Attorney General also suggested that we used a 75 percent or greater response rate for evaluation surveys as a standard of rigor. In fact, we did not—we simply pointed out that NIJ documents showed a 43 percent response rate on one of the STOP Grant Program evaluation surveys. This is below OMB’s threshold response rate level—the level below which OMB believes nonresponse bias and statistical problems are particularly likely to affect surveys. Given OMB guidance, serious questions could be raised about program conclusions drawn from the results of a survey with a 43 percent response rate. In addition, the Assistant Attorney General suggested that, by GAO standards, she would have to require state, local, or tribal government officials to furnish complete and accurate data in a timely manner. In fact, our report only points out that NIJ reported that evaluators were hindered in carrying out evaluations because of the lack of automated data systems or because data were missing, lost, or unavailable—again, challenges to achieving methodologically rigorous evaluations that could produce meaningful and definitive results.
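The response-rate concern discussed above can be made concrete with a short, hypothetical sketch. The 0.75 threshold below is the 75 percent figure mentioned in the surrounding discussion, used here only as an assumed cutoff, and the count of 55 returned surveys is an invented number consistent with a 43 percent response from 127 grantees; neither is taken from OMB guidance or the evaluation files.

```python
# Hypothetical response-rate check. When too few grantees return a
# survey, nonresponse bias can undermine any broad-based conclusions
# drawn from the results.

def response_rate(returned, mailed):
    """Fraction of mailed surveys that were returned."""
    return returned / mailed

def flag_nonresponse_risk(returned, mailed, threshold=0.75):
    """True when the response rate falls below the assumed threshold,
    signaling that nonresponse bias could affect the findings."""
    return response_rate(returned, mailed) < threshold

rate = response_rate(55, 127)          # invented count; roughly 43 percent
print(round(rate, 2))                  # 0.43
print(flag_nonresponse_risk(55, 127))  # True: too low to generalize
```

The point of the sketch is simply that a 43 percent return falls well short of any plausible threshold for generalizing to all grantees.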
Finally, the Assistant Attorney General’s letter commented that one of the elements needed to meet “all of GAO’s conditions” of methodological rigor is sufficient funding. She stated that more rigorous impact evaluations cost more than those that produce less scientifically rigorous findings, and she said that OJP is examining the issue of how to finance effective impact evaluations. We did not assess whether funding is sufficient to conduct impact evaluations, but we recognize that designing effective and rigorous impact evaluations can be expensive—a condition that could affect the number of impact evaluations conducted. However, we continue to believe that with more up-front attention to the rigor of ongoing and future evaluations, NIJ can increase the likelihood of conducting impact evaluations that produce meaningful and definitive results. In addition to the above comments, the Assistant Attorney General made a number of suggestions related to topics in this report. We have included the Assistant Attorney General’s suggestions in the report, where appropriate. Also, the Assistant Attorney General provided other comments in response to which we did not make changes. See appendix III for a more detailed discussion of the Assistant Attorney General’s comments. We are sending copies of this report to the Chairman and the Ranking Minority Member of the Senate Judiciary Committee; to the Chairman and Ranking Minority Member of the House Judiciary Committee; to the Chairman and Ranking Minority Member of the Subcommittee on Crime, House Committee on the Judiciary; to the Chairman and the Ranking Minority Member of the House Committee on Education and the Workforce; to the Attorney General; to the OJP Assistant Attorney General; to the NIJ Director; to the BJA Director; to the VAWO Director; and to the Director, Office of Management and Budget. We will also make copies available to others on request.
If you or your staff have any questions about this report, please contact John F. Mortin or me at (202) 512-8777. Key contributors to this report are acknowledged in appendix IV. Evaluation findings Assessment of evaluation This evaluation has several limitations. (1) The choice of the 10 impact sites is skewed toward the National Evaluation of the Rural Domestic Violence and Child Victimization Grant Program COSMOS Corporation The Violence Against Women Office’s (VAWO) Rural Domestic Violence Program, begun in fiscal year 1996, has funded 92 grantees through September 2001. The primary purpose of the program is to enhance the safety of victims of domestic abuse, dating violence, and child abuse. The program supports projects that implement, expand, and establish cooperative efforts between law enforcement officers, prosecutors, victim advocacy groups, and others in investigating and prosecuting incidents of domestic violence, dating violence, and child abuse; provide treatment, counseling, and assistance to victims; and work with the community to develop educational and prevention strategies directed toward these issues. The impact evaluation began in July 2000, with a final report expected no earlier than April 2002. Initially, 10 grantees were selected to participate in the impact evaluation; 9 remain in the evaluation. Two criteria were used in the selection of grant participants: the “feasibility” of earlier site-visited grantees to conduct an outcome evaluation and VAWO recommendations based on knowledge of grantee program activities. Logic models were developed, as part of the case study approach, to show the logical or plausible links between a grantee’s activities and the desired outcomes. 
The specified outcome data were to be collected from multiple sources, using a variety of methodologies during 2- to- 3-day site visits (e.g., multiyear criminal justice, medical, and shelter statistics were to be collected from archival records; community stakeholders were to be interviewed; and grantee and victim service agency staff were to participate in focus groups). At the time of our review, this evaluation was funded at $719,949. The National Institute of Justice (NIJ) could not separate the cost of the impact evaluation from the cost of the process evaluation. Too early to assess. technically developed evaluation sites and is not representative of either all Rural Domestic Violence Program Grantees, particular types of projects, or delivery styles. (2) The lack of comparison groups will make it difficult to exclude the effect of external factors, such as victim safety and improved access to services, on perceived change. (3) Several so-called short-term outcome variables are in fact process variables (e.g., number of police officers who complete a training program on domestic violence, identification of barriers to meeting the needs of women and children). (4) It is not clear how interview and focus group participants are to be selected, (5) Statistical procedures to be used in the analyses have not been sufficiently identified. The NIJ peer review committee had concerns about whether the evaluation could demonstrate that the program was working. NIJ funded the application as a cooperative agreement because a substantial amount of agency involvement was deemed necessary to meet the objectives of the evaluation. 
National Evaluation of the Arrest Policies Program, conducted by the Institute for Law and Justice (ILJ). The purpose of VAWO’s Arrest Policies Program is to encourage states, local governments, and Indian tribal governments to treat domestic violence as a serious violation of criminal law. The program received a 3-year authorization (fiscal years 1996 through 1998) at approximately $120 million to fund grantees under six purpose areas: implementing mandatory arrest or proarrest programs and policies in police departments, tracking domestic violence cases, centralizing and coordinating police domestic violence operations, coordinating computer tracking systems, strengthening legal advocacy services, and educating judges and others about how to handle domestic violence cases. Grantees have flexibility to work in several of these areas. At the time the NIJ evaluation grant was awarded, 130 program grantees had been funded; the program has since expanded to 190 program grantees. The impact evaluation began in August 1998, with a draft final report due in March 2002. Six grantees were chosen to participate in the impact evaluation. Each of the six sites was selected on the basis of program “stability,” not program representativeness. Within sites, both quantitative and qualitative data were to be collected and analyzed to enable better understanding of the impact of the Arrest Program on offender accountability and victim well-being. This process entailed reviewing data on the criminal justice system’s response to domestic violence; tracking a random sample of 100 offender cases, except in rural areas, to determine changes in offender accountability; conducting content analyses of police incident reports to assess change in police practices and documentation; and interviewing victims or survivors at each site to obtain their perceptions of the criminal justice system’s response to domestic violence and its impact on their well-being. ILJ had planned cross-site comparisons and the collection of extensive historical and interview data to test whether competing factors could be responsible for changes in arrest statistics. At the time of our review, this evaluation was funded at $1,130,574. NIJ could not separate the cost of the impact evaluation from the cost of the process evaluation. Evaluation findings: Too early to assess. Assessment of evaluation: Methodological design and implementation issues may cause difficulties in attributing program impact. This evaluation has several limitations: the absence of a representative sampling frame for site selection, the lack of comparison groups, the inability to conduct cross-site comparisons, and the lack of a sufficient number of victims in some sites to provide a perspective on the changes taking place in domestic violence criminal justice response patterns and victim well-being. In addition, there was difficulty collecting pre-grant baseline data, and the quality of the data was oftentimes inadequate, limiting the ability to measure change over time. Further, automated data systems were not available in all work areas, and data were missing, lost, or unavailable. An NIJ peer review committee also expressed some concerns about the grantee’s methodological design.
Impact Evaluation of STOP Grant Programs for Reducing Violence Against Indian Women, conducted by the University of Arizona. VAWO’s STOP (Services, Training, Officers, and Prosecutors) Grant Programs for Reducing Violence Against Indian Women Discretionary Grant Program was established under Title IV of the Violent Crime Control and Law Enforcement Act of 1994. The program’s principal purpose is to reduce violent crimes against Indian women. The program, which began in fiscal year 1995 with 14 grantees, encourages tribal governments to develop and implement culture-specific strategies for responding to violent crimes against Indian women and providing appropriate services for those who are victims of domestic abuse, sexual assault, and stalking. In this effort, the program provided funding for the services and training, and required the joint coordination, of nongovernmental service providers, law enforcement officers, and prosecutors; hence the name, the STOP Grant Programs for Reducing Violence Against Indian Women. The University of Arizona evaluation began in October 1996, with an expected final report due in March 2002. The basic analytical framework of this impact evaluation involves the comparison of quantitative and qualitative pre-grant case study histories of participating tribal programs with changes taking place during the grant period. Various data collection methodologies have been adopted (at least in part, to be sensitive to the diverse Indian cultures): 30-minute telephone interviews, mail surveys, and face-to-face 2- to 3-day site visits. At the time of our review, this evaluation was funded at $468,552. NIJ could not separate the cost of the impact evaluation from the cost of the process evaluation. Evaluation findings: Too early to assess. Assessment of evaluation: A number of methodological aspects of the study remain unclear: the site selection process for “in-depth case study evaluations”; the methodological procedures for conducting the longitudinal evaluation; the measurement, validity, and reliability of the outcome variables; the procedures for assessing impact; and the statistical tests to be used for determining significant change. Comparison groups are not included in the methodological design. In addition, only 43 percent of the grantees returned the mail survey, only 25 percent could provide the required homicide and hospitalization rates, and only 26 victims of domestic violence and assault could be interviewed (generally too few to measure statistical change). Generalization of evaluation results to the entire STOP Grant Programs for Reducing Violence Against Indian Women will be difficult, given these problems.

Longitudinal Impact Evaluation of the Strategic Intervention for High Risk Youth (a.k.a. the Children at Risk Program), conducted by The Urban Institute. The Children at Risk (CAR) Program, a comprehensive drug and delinquency prevention initiative funded by the Bureau of Justice Assistance (BJA), the Office of Juvenile Justice and Delinquency Prevention (OJJDP), the Center on Addiction and Substance Abuse, and four private foundations, was established to serve as an experimental demonstration program from 1992 to 1996 in five grantee cities. Low-income youths (11 to 13 years old) and their families, who lived in severely distressed neighborhoods at high risk for drugs and crime, were targeted for intervention. Eight core service components were identified: case management, family services, education services, mentoring, after-school and summer activities, monetary and nonmonetary incentives, community policing, and criminal justice and juvenile intervention (through supervision and community service opportunities).
The goals of the program were to reduce drug use among targeted families and improve the safety and overall quality of life in the community. The evaluation began in 1992, and the final report was submitted in May 1998. The study used both experimental and quasi-experimental evaluation designs. A total of 671 youths in target neighborhood schools were randomly assigned to either a treatment group (which received CAR services and the benefit of a safer neighborhood) or a control group (which received only a safer neighborhood). Comparison groups (n=203 youths) were selected from similar high-risk neighborhoods by means of census tract data; comparison groups did not have access to the CAR Program. Interviews were conducted with youth participants at program entry (baseline), at program completion (2 years later), and 1 year after program completion. A parent or caregiver was interviewed at program entry and completion. Records from schools, police, and courts were collected annually for each youth in the sample as a means of obtaining more objective data. The total evaluation funding was $1,034,732. Evaluation findings: Youths participating in CAR were significantly less likely than youths in the control group to have used gateway and serious drugs, to have sold drugs, or to have committed violent crimes in the year after the program ended. CAR youths were more likely than youths in the control and comparison groups to report attending drug and alcohol abuse programs. CAR youths received more positive peer support than controls, associated less frequently with delinquent peers, and were pressured less often by peers to behave in antisocial ways. CAR households used more services than control group households, but the majority of CAR households did not indicate using most of the core services available. Assessment of evaluation: CAR is a methodologically rigorous evaluation in both its design and implementation.
The evaluation findings demonstrate the value of the program as a crime and drug prevention initiative. According to NIJ, BJA, and VAWO officials, NIJ has the primary role in disseminating the results of the Byrne and VAWO discretionary grant program evaluations it manages because NIJ is responsible for conducting these types of evaluations. NIJ is authorized to share the results of its research with federal, state, and local governments. NIJ also disseminates information on methodology designs. NIJ’s practices for disseminating program evaluation results are specified in its guidelines. According to the guidelines, once NIJ receives a final evaluation report from the evaluators and the results of peer reviews have been incorporated, NIJ grant managers are to carefully review the final product and, with their supervisor, recommend to the NIJ Director which program results to disseminate and the methods for dissemination. Before making a recommendation, grant managers and their supervisors are to consider various criteria, including policy implications, the nature of the findings and research methodology, the target audience and their needs, and the cost of various forms of dissemination. Upon receiving the recommendation, the Director of NIJ is to make final decisions about which program evaluation results to disseminate. NIJ’s Director of Planning and Management said that NIJ disseminates program evaluation results that are peer reviewed, are deemed successful, and add value to the field. Once the decision has been made to share program evaluation results and methodologies with researchers and practitioners, NIJ can choose from a variety of publications, including its Research in Brief; NIJ Journal–At a Glance: Recent Research Findings; Research Review; NIJ Journal–Feature Article; and Research Report. In addition, NIJ provides research results on its Internet site and at conferences.
For example, using its Research in Brief publication, NIJ disseminated impact evaluation results on the Byrne Children at Risk (CAR) program to 7,995 practitioners and researchers, including state and local government and law enforcement officials; social welfare and juvenile justice professionals; and criminal justice researchers. In addition, using the same format, NIJ stated that it distributed the results of its process evaluation of the Byrne Comprehensive Communities Program (CCP) to 41,374 constituents, including local and state criminal and juvenile justice agency administrators, mayors and city managers, leaders of crime prevention organizations, and criminal justice researchers. NIJ and other OJP offices and bureaus also disseminated evaluation results during NIJ’s annual conference on criminal justice research and evaluation. The July 2001 conference was attended by 847 public and nonpublic officials, including criminal justice researchers and evaluation specialists from academic institutions, associations, private organizations, and government agencies; federal, state, and local law enforcement, court, and corrections officials; and officials representing various social service, public housing, school, and community organizations. In addition to NIJ’s own dissemination activities, NIJ’s Director of Planning and Management said that NIJ allows and encourages its evaluation grantees to publish the results of their NIJ-funded research via nongovernmental channels, such as in journals and through presentations at professional conferences. Although NIJ requires its grantees to provide advance notice if they are publishing their evaluation results, it does not have control over its grantees’ ability to publish these results. NIJ does, however, require a Justice disclaimer that the “findings and conclusions reported are those of the authors and do not necessarily reflect the official position or policies of the U.S.
Department of Justice.” For example, although NIJ has not yet disseminated the program evaluation results of the three ongoing VAWO impact evaluations that we reviewed, one of the evaluation grantees has already issued, on its own Internet site, 9 of 20 process evaluation reports on the Arrest Policies evaluation grant. The process evaluations were a component of the NIJ grantee’s impact evaluation of the Arrest Policies Program. Because the evaluations were not completed, NIJ required that the grantee’s publication of the process evaluations be identified as a draft report pending final NIJ review. As discussed earlier, NIJ publishes the results of its evaluations in several different publications. For example, NIJ used the Research in Brief format to disseminate evaluation results for two of the five Byrne discretionary grant programs evaluated during fiscal years 1995 through 2001: the Comprehensive Communities Program (CCP) and the Children at Risk Program (CAR). Both publications summarize each program’s evaluation results, the methodologies used to conduct the evaluations, information about the implementation of the programs themselves, and the services that the programs provided. CCP’s evaluation results were based on a process evaluation. Although a process evaluation does not assess the results of the program being evaluated, it can provide useful information that explains the extent to which a program is operating as intended. The NIJ Research in Brief on the Byrne CAR Discretionary Grant Program provides a summary of issues and findings regarding the impact evaluation. That summary included findings reported one year after the end of the program, in addition to a summary of the methodology used to conduct the evaluation, the outcomes, the lessons learned, and a major finding from the evaluation. Following are GAO’s comments on the Department of Justice’s February 13, 2002, letter. 1.
We have amended the text to further clarify that BJA administers the Byrne program, just as its counterpart, VAWO, administers its programs (see page 4). However, it is important to point out that regardless of the issues raised by OJP, the focus of our work was on the methodological rigor of the evaluations we reviewed, not the purpose and structure of the programs being evaluated. As discussed in our Scope and Methodology section, our work focused on program evaluation activities associated with Byrne and VAWO discretionary grant programs generally and the methodological rigor of impact evaluation studies associated with those programs in particular. To make our assessment, we relied on NIJ officials to identify which of the program evaluations of Byrne and VAWO grant programs were, in fact, impact evaluation studies. We recognize that there are substantial differences among myriad OJP programs that can make the design and implementation of impact evaluations arduous. But that does not change the fact that impact evaluations, regardless of differences in programs, can benefit from stronger up-front attention to better ensure that they provide meaningful and definitive results. 2. We disagree with OJP’s assessment of our report’s treatment of program variation. As discussed earlier, the scope of our review assessed impact evaluation activities associated with Byrne and VAWO discretionary grant programs, not the programs themselves. We examined whether the evaluations that NIJ staff designated as impact evaluations were designed and implemented with methodological rigor. In our report we observe that variations in projects funded through VAWO programs complicate the design and implementation of impact evaluations. According to the Assistant Attorney General, this variation in projects is consistent with the intent of the programs’ authorizing legislation. We recognize that the authorizing legislation provides VAWO the flexibility in designing these programs.
In fact, we point out that although such flexibility may make sense from a program perspective, project variation makes it much more difficult to design and implement a definitive impact evaluation. This poses sizable methodological problems because an aggregate analysis, such as one that might be constructed for an impact evaluation, could mask the differences in effectiveness among individual projects and therefore not result in information about which configurations of projects work and which do not. 3. We have amended the Results in Brief to clarify that peer reviews evaluated proposals. However, it is important to note that while the peer review committees may have found the two VAWO grant applications to be the most superior, this does not necessarily imply that the impact evaluations resulting from these applications were well designed and implemented. As discussed in our report, the peer review panel for each of the evaluations expressed concerns about the proposals that were submitted, including issues related to site selection and the need for additional impact measures and control variables. Our review of the documents NIJ made available to us, including evaluators’ responses to peer review comments, led to questions about whether the evaluators’ proposed methodological designs were sufficient to allow the evaluation results to be generalized and to determine whether the program was working. 4. We have amended the Background section of the report to add this information (see page 6). 5. As discussed in OJP’s comments, we discussed external factors that could account for changes that the Rural Program evaluation observed in victims’ lives and the criminal justice system. We did so not to critique or endorse activities that the program was or was not funding, but to demonstrate that external factors may influence evaluation findings. 
To the extent that such factors are external, the Rural Program evaluation methodology should account for their existence and attempt to establish controls to minimize their effect on results (see page 14). We were not intending to imply that alcohol is a cause for domestic violence, as suggested by the Assistant Attorney General, but we agree that it could be an exacerbating factor that contributes to violence against women. 6. As discussed earlier, we recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices. Also, as stated numerous times in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task but also that measuring the impact of these specific Byrne and VAWO programs can be arduous given that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ impact evaluations will provide meaningful results for policymakers. Regarding the representativeness of sites, NIJ documents that were provided during our review indicated that sites selected during the Rural Program evaluation were selected on the basis of feasibility, as discussed in our report—specifically, whether the site would be among those participants equipped to conduct an evaluation. In its comments, OJP stated that the 6 sites selected for the impact evaluation were chosen to maximize geographical and purpose area diversity while focusing on sites with high program priority. OJP did not provide any additional information that would further indicate that the sites were selected on a representative basis.
OJP did, however, point out that the report does not address how immensely expensive the Arrest evaluation would have become if it had included all 130 sites. We did not address specific evaluation site costs because we do not believe that there is a requisite number of sites needed for any impact evaluation to be considered methodologically rigorous. Regarding OJP’s comment about the flexibility given to grantees in implementing VAWO grants, our report points out that project variation complicates evaluation design and implementation. Although flexibility may make sense from a program perspective, it makes it difficult to generalize about the impact of the entire program. 7. We used the drug court example to illustrate, based on our past work, how comparison groups can be used in evaluation to isolate and minimize external factors that could influence the study results. We did not, nor would we, suggest that any particular unit of analysis is appropriate for VAWO evaluations since the appropriate unit of analysis is dependent upon the specific circumstances of the evaluation. We were only indicating that since comparison groups were not utilized in the studies, the evaluators were not positioned to demonstrate that change took place as a result of the program. 8. We do not dispute that VAWO grant programs may provide valuable outputs over the short term. However, as we have stated previously, the focus of our review was on the methodological rigor of impact evaluations--those evaluations that are designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. Given the methodological issues we found, it is unclear whether NIJ will be able to discern long-term effects due to the program. 9. 
As stated in our report, we acknowledge not only that impact evaluation can be an inherently difficult and challenging task, but that measuring the impact of Byrne and VAWO programs can be arduous given the fact that they are operating in an ever-changing, complex environment. We agree that not all evaluation issues that can compromise results are easily resolvable, but we firmly believe that, with more up-front attention to design and implementation issues, there is a greater likelihood that NIJ evaluations will provide meaningful results for policymakers. As we said before, absent this up-front attention, questions arise as to whether NIJ is (1) positioned to provide the definitive results expected from an impact evaluation and (2) making sound investments given the millions of dollars spent on these evaluations. If NIJ believes that the circumstances of a program are such that it cannot be evaluated successfully (in relation to impact), it should not proceed with an impact evaluation. 10. We have amended the footnote to state that from fiscal year 1995 through fiscal year 1999, this program was administered by VAWO. As of fiscal year 2000, responsibility for the program was shifted to OJP’s Corrections Program Office (see page 5). 11. In regard to the number of grants, we have amended the text to reflect that the information NIJ provided during our review is the number of grantees, not the number of grants (see pages 25 and 26). We have also amended our report to reflect some of the information provided in VAWO’s description of the Rural Domestic Violence Program to further capture the essence of the program (see page 25). 12. We disagree. We believe that separating the cost of the impact and process evaluations is more than a matter of bookkeeping.
Even though the work done during the process phase of an evaluation may have implications for the impact evaluation phase of an evaluation, it would seem that, given the complexity of impact evaluations, OJP and NIJ would want to have in place appropriate controls to provide reasonable assurance that the evaluations are being effectively and efficiently carried out at each phase of the evaluation. Tracking the cost of these evaluation components would also help reduce the risk that OJP’s, NIJ’s, and, ultimately, the taxpayer’s investment in these impact evaluations is wasted. 13. As discussed earlier, we recognize that there are substantive differences in the intent, structure, and design of the various discretionary grant programs managed by OJP and its bureaus and offices, including those managed by VAWO. Our report focuses on the rigor of impact evaluations of grant programs administered by VAWO and not on the programs’ implementing legislation. Although flexibility may make sense from a program perspective, it makes it difficult to develop a well-designed and methodologically rigorous evaluation that produces generalizable results about the impact of the entire program. 14. Our report does not suggest that other types of evaluations, such as comprehensive process evaluations, are any less useful in providing information about how well a program is operating. The scope of our review covered impact evaluations of Byrne and VAWO discretionary grant programs—those designed to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program. In addition to the above, Wendy C. Simkalo, Jared A. Hermalin, Chan My J. Battcher, Judy K. Pagano, Grace A. Coleman, and Ann H. Finley made key contributions to this report.

Discretionary grants awarded under the Bureau of Justice Assistance's (BJA) Byrne Program help state and local governments make communities safe and improve criminal justice.
Discretionary grants awarded under BJA's Violence Against Women Office (VAWO) programs are aimed at improving criminal justice system responses to domestic violence, sexual assault, and stalking. The National Institute of Justice (NIJ) awarded $6 million for five Byrne Program and five VAWO discretionary grant program evaluations between 1995 and 2001. Of the 10 programs evaluated, all five VAWO evaluations were designed to be both process and impact evaluations of the VAWO programs. Only one of the five Byrne evaluations was designed as an impact evaluation and the other four evaluations were process evaluations. GAO's in-depth review of the four impact evaluations since fiscal year 1995 showed that only one of these--the evaluation of the Byrne Children at Risk Program--was methodologically sound. The other three evaluations, all of which examined VAWO programs, had methodological problems. |
To address your request, we matched selected demographic data from the U.S. Census Bureau with voting equipment, voter turnout, and presidential vote data obtained from Election Data Services (EDS) and the Internet web sites of state election officials. We statistically analyzed county-level data to investigate relationships among counties’ demographic characteristics, their voting equipment, and their percentages of uncounted presidential votes. We also statistically controlled for the state in which counties are located. We included data from 43 states and the District of Columbia, representing 78 percent of the counties in the United States. Our results should not be generalized beyond this set of locations. The county demographic characteristics included from the 2000 Census were population size, racial composition (percent of African American and Hispanic residents in the county), and age (percent of 18-24 year olds and residents over 65). We included estimates of median income and percent of residents living below the poverty level from a 1997 Census model, and education data (percent of high school graduates in a county) from the 1990 Census. We measured uncounted presidential votes by subtracting the number of votes for President from the number of total ballots cast. Both numbers were included in EDS’ data along with voting equipment information for each county. We supplemented the analysis using GAO survey data from a representative sample of county election officials to obtain further information on the use of error correction in conjunction with the various types of voting equipment. Because of the unavailability of comprehensive data, we could not determine why votes for President were not counted; could not distinguish between ballots cast at the polling place on election day and those cast by absentee ballot or through early voting; and could not assess the reliability of different models of the same type of voting equipment. 
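The uncounted-vote measure described above (total ballots cast minus votes for President, expressed as a share of ballots cast) can be sketched as follows. This is only an illustration: the county figures and field names are invented and do not reflect the actual EDS data layout.

```python
# Illustrative sketch only: county figures and field names are invented,
# not taken from the EDS data described in the text.
counties = [
    {"name": "County A", "ballots_cast": 50_000, "votes_for_president": 49_100},
    {"name": "County B", "ballots_cast": 12_000, "votes_for_president": 11_400},
]

def pct_uncounted(county):
    """Uncounted presidential votes as a percentage of total ballots cast."""
    uncounted = county["ballots_cast"] - county["votes_for_president"]
    return 100.0 * uncounted / county["ballots_cast"]

for c in counties:
    print(f"{c['name']}: {pct_uncounted(c):.1f} percent uncounted")
```

As the sketch suggests, the measure cannot distinguish why a presidential vote was not counted (overvote, undervote, or deliberate abstention), which is one of the data limitations noted above.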
Additional information on our methodology and its limitations is provided in appendix I. We conducted our work from March through October 2001 in accordance with generally accepted government auditing standards. Each state and the District of Columbia play a role in elections by establishing election laws, policies, and procedures. In most states, counties are responsible for conducting elections, including selecting countywide voting equipment, counting ballots, and reporting elections results. In separate reports, we provide more in-depth information on election issues relating to people, processes, and technology at the county and state levels. The equipment on which votes were cast and counted in the November 2000 election can be placed into five categories: paper ballots, lever machines, punch cards, optical scan, and electronic. Three of these five types of equipment—lever, optical scan, and electronic—have some capability or can be used to prevent or allow for the correction of voting errors. Paper ballots. Paper ballots list the names of the candidates and the issues to be voted on. Voters generally complete their ballots in the privacy of a voting booth, recording their choices by placing marks in boxes corresponding to the candidates’ names and the issues. After making their choices, voters drop the ballots into sealed ballot boxes. Election officials gather the sealed boxes and transfer them to a central location, where the ballots are manually counted and tabulated. Lever machines. Lever machine “ballots” consist of a rectangular array of levers. Printed strips listing the candidates and issues are placed next to each lever. Voters cast their vote by pulling down the levers next to the candidates or issues of their choice. After voting, the voter moves a handle, which automatically records the vote and resets the levers. Votes are tallied by mechanical counters, which are attached to each lever. 
At the close of the election, election officials tally the votes by reading the counting mechanism totals on each lever voting machine. A feature inherent to lever machines is that they prevent voters from overvoting (i.e., voting more than once for the same office, unless the ballot explicitly allows for more than one choice to be made). Overvoting is prevented by the interlocking of the appropriate mechanical levers in the machine. Punch cards. Punch card voting equipment generally consists of a ballot, a vote recording device, a privacy booth, and a computerized tabulation device. Votes are cast by inserting the ballot into the vote recording device and punching a hole through the ballot such that the hole corresponds to the voter’s ballot choice. Votes cast on punch card equipment are machine readable. Votes are tabulated using vote tabulation machines, and software is used to program each vote tabulation machine to correctly assign each vote read into the computer to the proper race and candidate or issue. The two basic types of punch card devices are Votomatic and Datavote. Optical scan. An optical scan voting system is comprised of computer- readable ballots, appropriate marking devices, privacy booths, and a computerized tabulation machine. The ballot lists the names of the candidates and the issues. Voters record their choices using an appropriate writing instrument to fill in boxes or ovals, or to complete an arrow next to the candidate’s name or the issue. Like punch card software, the software for optical scan equipment is used to program the tabulation equipment to correctly assign each vote read into the computer to the proper race and candidate or issue. Optical scan equipment based in precincts can be programmed to detect and reject both overvoting and undervoting (i.e., not registering a vote for every race and/or issue on the ballot). Using such error correction technology could allow voters to fix their mistakes before leaving the polling place. 
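The precinct-based error check described above can be sketched as a simple validation routine that flags overvotes and undervotes so a voter could correct the ballot before leaving the polling place. The races, ballot layout, and function names here are hypothetical, not drawn from any actual voting system.

```python
# Hypothetical sketch of a precinct-count overvote/undervote check.
def check_ballot(ballot, races):
    """Return a list of (race, problem) pairs; an empty list means accepted.

    `ballot` maps race name -> list of marked choices;
    `races` maps race name -> number of choices the voter may select.
    """
    problems = []
    for race, allowed in races.items():
        marks = len(ballot.get(race, []))
        if marks > allowed:
            problems.append((race, "overvote"))
        elif marks == 0:
            problems.append((race, "undervote"))
    return problems

races = {"President": 1, "Senator": 1}
spoiled = {"President": ["Smith", "Jones"], "Senator": []}
clean = {"President": ["Smith"], "Senator": ["Lee"]}

print(check_ballot(spoiled, races))  # both races flagged
print(check_ballot(clean, races))    # no problems
```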
If ballots are tabulated centrally, voters do not have the opportunity to correct mistakes that may have been made. Electronic. Electronic equipment (also called Direct Recording Electronic or DRE) comes in two basic types, pushbutton or touchscreen, with the pushbutton being the older and more widely used of the two. For pushbuttons, voters press a button next to the name of the candidate or the issue, which then lights up to indicate the selection. Similarly, voters using touchscreens make their selections by touching the screen next to the candidate or issue, which is then highlighted. When voters are finished making their selections, they cast their votes by pressing a final “vote” button or screen. Because all electronic equipment is programmable, it does not allow overvotes. In addition, voters can change their selections before hitting the final button to cast their votes. There have been several broad-based studies that have examined relationships among voter demographics, voting equipment, and/or uncounted votes. These studies, whose methods and findings we did not independently verify, included the following. A recent research study estimated that about 1.5 million voters thought they had voted for President but did not have their votes for President counted in the 2000 election. Faulty voting equipment and confusing ballots were among the stated reasons for the ballots being unmarked, spoiled, or too ambiguous to count. The study reported that punch card and electronic voting equipment were associated with uncounted votes for President exceeding 2 percent of all ballots cast. (CalTech/MIT, July 2001.) Another recent research study reported that, despite the perception that minorities and poor people were disproportionately more likely to vote on antiquated voting machinery and therefore have their ballots invalidated, the data did not support this contention. 
The study found that in the majority of states, whites and non-poor voters were more likely than African Americans and poor voters to reside in counties that used punch card equipment, based on 1998 voter equipment data. (Knack & Kropf, Jan. 2001.) A study of invalidated ballots in the 1996 presidential election found that counties with more African Americans and Hispanics were more likely to have higher rates of invalidated ballots, particularly in counties using punch card machines, optical scanners with centralized (as opposed to precinct-based) counting, and hand-counted paper ballots. When counties used equipment that can be programmed to prevent overvoting (i.e., lever technology, electronic voting technology, and precinct-count optical scan systems), racial differences in the rate of invalidated votes disappeared. (Knack & Kropf, May 2001.) A study of the 2000 presidential election found that the percentage of uncounted votes in 20 congressional districts with low-income/high-minority populations was higher, regardless of the type of voting equipment used, than in 20 congressional districts with high-income/low-minority populations. In both types of districts, the percent of uncounted votes was highest when punch card equipment was used. (House Committee on Government Reform, Minority Staff, Special Investigations Division, July 2001.) In the November 2000 presidential election, there were over 85 million votes cast in the 2,455 counties in our analysis and, of those, 1.6 million votes for President were not counted. The percentage of uncounted votes ranged from 0 percent to 23 percent, with an average of 2.3 percent. Only 12 counties had percentages of uncounted votes that exceeded 10 percent. Of the 2,455 counties, 284 (or 12 percent) used electronic voting equipment, 381 (16 percent) used lever equipment, 1,095 (45 percent) used optical scan equipment, 213 (9 percent) used paper ballots, and 482 (20 percent) used punch card equipment. 
Furthermore, Table 1 shows that while 35 percent of the ballots cast came from counties using punch card equipment, 49 percent of the uncounted presidential votes were cast on punch card equipment. Counties with different voting equipment differed demographically. (See table 2.) Counties that used punch cards, for example, had larger populations; higher median incomes; and smaller percentages of residents over 65 years of age and persons living below the poverty level than counties using other types of voting equipment. Our analysis did not show that minorities, or persons with less education or income, were more likely than others to be found in counties that used punch card voting equipment, the equipment associated with higher percentages of uncounted presidential votes. As the final row of table 2 shows, before controlling for demographic characteristics or state differences, the average percent of uncounted presidential votes was higher in counties that used punch cards (2.9 percent) than in other counties (2.1 percent to 2.3 percent). Overall, while we found that counties’ percentages of uncounted presidential votes were related to their voting equipment and demographic characteristics, these factors accounted for less of the variation in uncounted votes across counties than did the state in which the county is located. To determine how the percentages of uncounted votes across the counties for which we had data were affected by voting equipment, demographic characteristics, and the state in which counties are located, we used robust regression models that adjusted for the clustering (i.e., the lack of independence) of observations within states. Our statistical model included type of voting equipment, county demographic variables, and a set of variables to control for differences across states in which counties are located. (See app. I, table 3 for a more detailed discussion of all models and effects.) 
Our statistical model indicated that there were no significant differences in uncounted presidential votes among counties that use electronic, paper, and optical scan voting equipment. Counties with punch cards had percentages of uncounted presidential votes that were roughly 0.6 percentage points higher than those counties, and counties with lever machines had percentages of uncounted presidential votes that were 0.7 percentage points lower than those counties. Given that the average of the uncounted presidential votes across all counties was roughly 2 percent, these represent sizable, as well as statistically significant, differences. When the same statistical model was run for the subset of 404 counties that we surveyed, we found an additional equipment effect. The survey asked county election officials if they used equipment that either prevented errors or identified errors for voters so the ballot might be corrected. Since both electronic and lever equipment prevent overvotes, almost all of the counties using those types of equipment reported using error correction. In addition, almost all of the counties using punch card equipment and paper ballots reported not having or using error correction capabilities. Therefore, responses to the survey allowed us to distinguish between counties with optical scan equipment that used error correction and those that did not use it. Doing so resulted in significant differences between types of equipment. Counties using punch cards had uncounted presidential votes that were 1.1 percentage points higher than counties using error-corrected optical scan equipment. If we apply these results to the larger set of 2,455 counties, an estimated 300,000 additional votes may have been counted if counties that used punch card equipment had, instead, used precinct-based optical scan equipment with error correction. 
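The 300,000-vote figure can be roughly reproduced from numbers given earlier: about 35 percent of the roughly 85 million ballots in the analysis were cast on punch card equipment, and the punch card effect versus error-corrected optical scan was 1.1 percentage points. A back-of-the-envelope sketch (rounded inputs, not the report's county-by-county calculation, so it lands near rather than exactly on 300,000):

```python
# Rounded figures from the text:
total_ballots = 85_000_000     # votes cast in the 2,455 counties analyzed
punch_card_share = 0.35        # share of ballots cast on punch card equipment
effect = 0.011                 # 1.1 percentage point difference vs. error-corrected optical scan

punch_card_ballots = total_ballots * punch_card_share
additional_counted = punch_card_ballots * effect
print(f"{additional_counted:,.0f} additional votes")
```

The result is on the order of 300,000, consistent with the estimate in the text.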
After we statistically controlled for the effects of state differences and voting equipment, uncounted presidential votes in our dataset of 2,455 counties were significantly higher in counties with higher percentages of African Americans and Hispanics. Each percentage point increase in a county’s population of African Americans was associated with a 0.02 percentage point increase in the county’s uncounted presidential votes. Each percentage point increase in a county’s population of Hispanics was associated with a 0.01 percentage point increase in the county’s uncounted presidential votes. This means, for example, that we would expect that a county where African Americans made up 35 percent of the population would have had uncounted presidential votes that were 0.6 percentage points higher than a county where African Americans made up 5 percent of the population. After we statistically controlled for the effects of state differences and voting equipment, uncounted presidential votes in our dataset of 2,455 counties were significantly lower in counties with higher percentages of high school graduates and 18- to 24-year-olds. Each percentage point increase in a county’s population of high school graduates was associated with a 0.06 percentage point decrease in the county’s uncounted presidential votes. Likewise, each percentage point increase in a county’s population of 18- to 24-year-olds was associated with a 0.03 percentage point decrease in the county’s uncounted presidential votes. This means, for example, that we would expect that a county where high school graduates made up 50 percent of the population would have had uncounted presidential votes that were 1.8 percentage points lower than a county where high school graduates made up 20 percent of the population. We next determined the incremental effects of voting equipment, county demographics, and state differences on counties’ percentage of uncounted presidential votes. 
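The two worked examples above are straight multiplications of the per-percentage-point coefficients; a quick check:

```python
# Marginal effects from the model, in percentage points of uncounted votes
# per percentage point of county population share (figures from the text):
effect_african_american = 0.02
effect_high_school_grad = -0.06

# County at 35 percent vs. 5 percent African American population:
print(round((35 - 5) * effect_african_american, 2))   # -> 0.6

# County at 50 percent vs. 20 percent high school graduates:
print(round((50 - 20) * effect_high_school_grad, 2))  # -> -1.8
```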
When we just included type of equipment in the statistical model, we found that equipment alone explained 2 percent of the variation in uncounted presidential votes across counties. When we added demographic variables to that model, the county demographics explained an additional 16 percent of the variation. Next, we included a set of variables to statistically control for differences across the states in which counties are located. This made it possible to account for an additional 26 percent of the variation in uncounted presidential votes. A supplemental analysis of a subset of 404 counties that we surveyed showed that including a county’s use of error correction with optical scan equipment would explain an additional 4 percent of the variation in uncounted votes across counties. Differences across states were of considerable importance in determining the prevalence of uncounted presidential votes and accounted for more of the variability (26 percent) in uncounted presidential votes across counties than demographic characteristics and type of voting equipment used combined. The following factors, for which we had no data because they have not been measured in a comprehensive, systematic way, are among those that may have contributed to differences among states: (1) voter education efforts, such as making sample ballots available prior to election day; (2) the use of straight party ballots that enable voters to make one entry to cast votes for all offices on the ballot; (3) the number of candidates on the ballot (including presidential, gubernatorial, or congressional candidates); (4) the number of provisional ballots cast, and percentage of provisional ballots that were not counted; and (5) the extent to which absentee and/or early voting occurred and if such ballots were counted using a different voting equipment than ballots cast on election day. 
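The incremental shares of variation described above come from comparing R-squared values of nested regression models: equipment alone, then equipment plus demographics, then both plus state indicators. A toy illustration on synthetic data (the real analysis used 2,455 counties and robust, cluster-adjusted regression; every number below is invented):

```python
# Nested-model variance decomposition on synthetic county-like data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
equipment = rng.integers(0, 5, n)      # stand-in for 5 equipment types
demog = rng.normal(size=(n, 3))        # stand-in demographic measures
state = rng.integers(0, 40, n)         # stand-in state membership

# Outcome built so each block contributes some variance, loosely echoing
# the report's pattern (equipment < demographics < states):
y = (0.3 * (equipment == 4) + demog @ [0.2, -0.1, 0.1]
     + 0.05 * state + rng.normal(scale=1.0, size=n))

def r_squared(X, y):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

equip_dummies = np.eye(5)[equipment][:, 1:]    # drop one category as referent
state_dummies = np.eye(40)[state][:, 1:]

r1 = r_squared(equip_dummies, y)
r2 = r_squared(np.column_stack([equip_dummies, demog]), y)
r3 = r_squared(np.column_stack([equip_dummies, demog, state_dummies]), y)
print(f"equipment alone: {r1:.2f}, + demographics: {r2 - r1:.2f}, + states: {r3 - r2:.2f}")
```

Each step reports only the *additional* variation explained, which is how the report's 2, 16, and 26 percent figures are to be read.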
The remaining 52 percent of the variation was due to unknown factors for which we had no data, such as whether a county switched to a new type of voting equipment or the number of inexperienced voters in a county. Like all four of the studies cited earlier in this report, we found that punch card equipment was associated with higher percentages of uncounted votes in counties, although our findings did not indicate, as did those of CalTech/MIT, that electronic voting equipment was similarly problematic. We also found, like Knack and Kropf, that minorities and persons with lower income were not more likely than others to reside in counties that used punch cards, and that counties with higher percentages of African Americans had higher percentages of uncounted presidential votes. We did not find, however, that the racial difference “disappears” in counties with certain voting equipment. Also, while there were differences between our study and that of the Special Investigations Division (e.g., our analytic methods did not involve making the same specific comparisons, and we analyzed counties while they analyzed congressional districts), our results do indicate, like theirs, that regardless of voting equipment, percentages of uncounted presidential votes were higher in high minority areas than in other areas. To the extent that our results are not consistent with the findings of others, factors that may account for these differences include the variables included in the analyses, the number of counties included in the dataset, and the age of the data used by the different studies. This report is one of several GAO studies addressing election issues. 
Our other reports discuss in greater depth election issues such as the scope of congressional authority in election administration, voter registration, absentee and early voting, voting assistance for military and overseas voters, election day administration, voting accessibility for voters with disabilities, vote counts and certification, Internet voting, and voting equipment standards. We are sending copies of this report to the Chairman of your Committee and to other congressional committees. Staff members who contributed to this review are acknowledged in appendix II. If you or your staff have any questions about this report, please contact me on (202) 512-8777. This appendix provides information on our analyses of uncounted presidential votes in the November 2000 general election and the extent to which these uncounted votes were affected by counties’ voting equipment, demographic characteristics, and state differences. It also discusses a separate analysis of a subset of counties in which we explored the potential of using optical scan equipment with error correction capability to reduce uncounted votes. Our database consisted of demographic, voting equipment and election results data for each of 2,455 counties in 43 states and the District of Columbia. The database included 78 percent of the nation’s 3,141 counties at the time of the 2000 presidential election. To our knowledge, these data were the most recent, comprehensive, and valid data available to address the research questions specified for our study. Notwithstanding the strengths of our database, the precision of our analytic results and our ability to explain why they occurred are limited by a number of factors, including missing data, omitted variables, and measurement error. For several reasons, we did not include a number of states and counties in our database. 
Specifically, we excluded (1) all counties in 6 states (Arkansas, Maine, Mississippi, Missouri, Pennsylvania, and Wisconsin), 107 counties in Texas, 1 county in Alabama, and 1 county in Oklahoma because these counties did not report the necessary data to calculate uncounted votes; (2) all voting jurisdictions in Alaska because they did not correspond directly to election districts; (3) counties that used a mix of voting equipment; (4) counties in which the reported numbers of votes cast for President exceeded the number of persons who turned out to vote; and (5) 1 county in which it appeared that only half the persons who turned out to vote cast a vote for President. Our results should be interpreted with caution for the following reasons: (1) The available data did not distinguish between votes cast at the polling place on election day and those cast by absentee ballot or through early voting. Because some locations used different equipment for absentee and/or early voting, we could not assess the impact of such differences on our results. (2) We did not have information on the particular model of voting equipment used, so uncounted presidential votes, even within a single county, may have been affected by differences in the reliability of different models of the same equipment. (3) We used aggregate county-level demographic data as a proxy for the characteristics of voters because we did not have data on individual voters. (4) We could not determine why votes for President were not counted. For example, we could not discern if uncounted presidential votes were due to voter error, equipment failure, errors on the part of election officials, or intentional nonvoting for the office of President. (5) In the absence of more current data, we analyzed 1990 Census data on education, which may have had different relationships with other variables in 2000 than it did in 1990. The extent to which such relationships may have changed is unknown. 
(6) Because our data on income and poverty were estimates derived from statistical models, they contained an unknown amount of measurement error that could not be accounted for in our statistical models. Our analyses included, along with descriptive statistics, analysis of variance methods and robust regression models that account for the clustering. To determine how the percentage of uncounted presidential votes was affected by the voting equipment employed in and the demographic characteristics of the counties for which we had data, we used a series of four robust regression models that adjusted for the clustering (i.e., the lack of independence) of observations within states. Model 1 in table 3 indicates that when demographic and other differences across counties are ignored, the average percentage of uncounted votes was significantly higher in counties that used punch card equipment than in counties that used optical scan equipment (which is the deleted referent category). Counties that used electronic, paper, or lever equipment, on the other hand, were not significantly different from those that used optical scan equipment. The R-squared value (i.e., the value representing the proportion of variation that the statistical model explained) for Model 1 indicates that differences in voting equipment accounted for only 2 percent of the variation in the percentage of uncounted votes across counties. This effect of voting equipment on uncounted votes may be due to various differences between types of equipment such as the design of the equipment by the manufacturer, the operation of the equipment by voters, or the processes that election officials used to prepare and operate the equipment. [Table 3, Model 4 coefficient column, flattened in extraction: 3.39, 0.63**, -0.32, -0.72**, -0.35, -0.13, 0.02**, 0.01*, -0.06**, -0.03**, 0.04, 0.02, 0.00, 0.44. * Statistically significant at the 0.05 level. ** Statistically significant at the 0.01 level.] 
When assessing the effects of demographic characteristics while ignoring differences in voting equipment across counties, as in Model 2, we found that the percentage of uncounted presidential votes was significantly higher in counties with smaller populations and in counties with higher percentages of African Americans. Other factors were not statistically significant. The demographic measures we considered, taken together, accounted for about 12 percent of the variability in the percentage of uncounted votes across counties. When we considered voting equipment and demographic factors jointly in Model 3, (1) we were able to account for 18 percent of the variation across counties in the percentage of uncounted presidential votes, and (2) punch card equipment, population size, and percent African American remained statistically significant. That is, regardless of county demographics, counties that used punch card equipment had higher percentages of uncounted presidential votes. Additionally, regardless of voting equipment, counties with higher percentages of African Americans had higher percentages of uncounted votes, and counties with larger populations had lower percentages of uncounted presidential votes. In our final model, Model 4, we estimated these same effects after allowing not only for clustering but also for differences across counties that were due to the unmeasured effects of the states they are located in. Using dummy variables (the coefficients for which are deleted from table 3) to allow these effects made it possible to account for about 44 percent of the variation in uncounted presidential votes. Moreover, Model 4 indicates that once this full set of differences was accounted for, there were no differences in uncounted presidential votes among counties that use electronic, paper, or optical scan voting equipment. 
Counties with punch cards had roughly 0.6 percentage points higher percentages of uncounted presidential votes than those counties, and counties with lever equipment had 0.7 percentage points lower percentages of uncounted presidential votes than those counties. Given that the average uncounted votes across all counties was roughly 2 percent, these represent sizable, as well as statistically significant, differences. The only demographic variables that were associated with significantly higher percentages of uncounted presidential votes when the state and voting equipment effects were controlled were higher percentages of residents who were African American and Hispanic. The demographic variables that were associated with significantly lower percentages of uncounted presidential votes when the state and voting equipment effects were controlled included higher percentages of high school graduates and 18- to 24-year-olds in the county. Characteristics of voters did not appear to interact with voting equipment to affect the percentage of uncounted votes, although our aggregated data were not well suited to addressing this issue. Models that included interactions between voting equipment and demographic characteristics (not shown) accounted for only about 1 percent of the variation in uncounted votes across counties. An additional key finding of our study was that differences across states were of considerable importance in determining the prevalence of uncounted presidential votes and accounted for more of the variability across counties in uncounted presidential votes (26 percent) than demographic characteristics (16 percent) and type of voting equipment (2 percent) combined. The following factors for which we had no data are among those that may have contributed to differences among states: 1. voter education efforts, such as making sample ballots available prior to election day; 2. 
the use of straight party ballots that enable voters to make one entry to cast votes for all offices on the ballot; 3. the number of candidates on the ballot (including presidential, gubernatorial, or congressional candidates); 4. the number of provisional ballots cast, and percentage of provisional ballots that were not counted; and 5. the extent to which absentee and/or early voting occurred and if such ballots were counted using different voting equipment than ballots cast on election day. When we ran Model 4 for a subset of 404 counties that GAO surveyed, we found an additional equipment effect. This survey asked county election officials if they used equipment that either prevents errors or identifies errors for voters so the ballot might be corrected. Since both electronic and lever equipment prevent “overvotes,” almost all of the counties using those types of equipment reported using error correction. In addition, almost all of the counties using punch card equipment and paper ballots reported not having or using error correction capabilities. Therefore, responses to the survey allowed us to distinguish between counties with optical scan equipment that used error correction and those that did not use it. Doing so resulted in significant differences between types of equipment. Counties using punch cards had percentages of uncounted presidential votes that were 1.1 percentage points higher than counties using error-corrected optical scan equipment. If the relationship that we found in these 404 counties holds true for the larger set of 2,455 counties, an estimated 300,000 additional votes may have been counted if counties that used punch card equipment had, instead, used precinct-based optical scan equipment with error correction. In addition to the above, Wendy Ahmed, Douglas Sloane, David Alexander, Amy Lyon, and Tanya Cruz made key contributions to this report. 
| Following the 2000 presidential election, concerns were raised about the election process, including the ability of some voting equipment to render a complete and accurate vote count. Furthermore, minorities and disadvantaged voters were seen as more likely to have their votes not counted because they may have used less reliable voting equipment than affluent white voters. GAO found that although the state in which counties are located had more of an effect on the number of uncounted presidential votes than did counties' demographic characteristics or voting equipment, there were statistically significant effects on uncounted presidential votes. State differences accounted for 26 percent of the total variation in uncounted presidential votes across counties. State differences may have included such factors as statewide voter education efforts, state standards for determining what is a valid vote, the use of straight party ballots, the number of candidates on the ballot, the use of provisional ballots, and the extent to which absentee or early voting occurred. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Long-term care, which may include care provided in nursing homes, assisted-living facilities, or a person’s home, can be expensive. In 2005, the average cost of a year in a nursing home was more than $70,000, and in 1999, according to the most recent data available, the average length of stay was between 2 and 3 years. Long-term care insurance helps individuals pay for costs associated with long-term care services. Yet relatively few individuals have obtained coverage. As of 2002, about 9 million people nationwide had obtained long-term care insurance. To help federal employees, retirees, and others obtain coverage, the federal government began offering long-term care insurance in 2002. Long-term care insurance helps pay for the costs associated with long-term care services. People can purchase long-term care insurance directly from carriers that sell products in the individual market, or they can enroll in plans offered by employers or other groups. For a specified premium that is designed—but not guaranteed—to remain level over time, the carrier agrees to provide covered benefits under an insurance contract. Long-term care insurance premiums are affected by many factors, including the benefits offered and the age and health status of the applicant. Carriers review the health status of the applicant during the underwriting process. Carrier assumptions about interest rates, mortality rates, morbidity rates, and lapse rates—the number of people expected to drop their policies over time—also affect premium rates. Carriers set premium rates to cover the anticipated cost of enrollee benefits (which means paying approved claims), administrative costs (which includes marketing costs, claims handling, overhead, and taxes), and profits. Few claims are expected to be submitted during the early years of an enrollee’s long-term care insurance policy. 
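The premium-setting factors just described (expected claims, expenses, profit, and assumptions about interest and lapse rates) can be illustrated with a stylized level-premium calculation: the premium is set so that the present value of expected premium income covers the present value of expected claims plus an expense-and-profit load. This is a toy present-value balance, not an actual actuarial method, and every number in it is invented:

```python
# Stylized level-premium calculation under assumed interest and lapse rates.
interest = 0.05            # assumed annual investment return (discount rate)
lapse = 0.04               # assumed annual policy lapse rate
years = 30                 # pricing horizon
# Expected annual claim cost per in-force policy, near zero in the early
# years (as noted above) and rising in later years; values are invented:
expected_claims = [0.0] * 5 + [200 + 400 * t for t in range(years - 5)]
expense_and_profit_load = 0.25   # share of premium for expenses and profit

pv_claims = 0.0
pv_premium_units = 0.0     # present value of $1 of annual premium while in force
in_force = 1.0
for t in range(years):
    discount = (1 + interest) ** -t
    pv_premium_units += in_force * discount
    pv_claims += in_force * discount * expected_claims[t]
    in_force *= (1 - lapse)

# Gross up claims cost for the expense/profit load:
level_premium = pv_claims / (pv_premium_units * (1 - expense_and_profit_load))
print(f"illustrative level annual premium: ${level_premium:,.0f}")
```

Lower assumed lapse rates or interest rates raise `pv_claims` relative to `pv_premium_units`, which is why carriers that overestimated either have had to raise premiums.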
As a result of underwriting, it is unlikely that many people could meet the eligibility requirements to buy a policy yet submit an approved claim within 3 years. Industry experts suggest that the rate of claim submissions begins to increase after about 3 to 7 years. Claims experience is one of many factors—such as interest rates and lapse rates—that affect the long-term financial outlook of a long-term care program. While having a lower-than-expected claims experience is a positive financial indicator, if the claims experience is significantly lower than expected over the long term, then it is possible that the premiums are too high. On the other hand, in accordance with the National Association of Insurance Commissioners (NAIC) premium-setting guidelines, it may be appropriate to project the claims experience assuming moderately adverse results to protect against the need to raise premiums. Insurance carriers’ long-term care insurance profits—defined as the excess of revenues over expenses—are affected by many factors, including the amount of risk the insurance carrier assumes. In general, the more risk a carrier assumes, the greater the carrier’s expected profits. Over time, carriers’ ability to meet or exceed their initial projections regarding interest rates, mortality rates, morbidity rates, and lapse rates, as well as their ability to contain costs, ultimately affects their profits. Carriers are also subject to state requirements, which may affect their ability to realize profits. Long-term care insurance is sold in two primary markets—the individual and group markets. Of the nearly 9 million policies sold as of 2002, the most recent year for which data were available, about 80 percent were sold through the individual market, and the remaining 20 percent were sold through the group market. Sales in the group market are growing faster than sales in the individual market. 
In March 2006, 13 percent of full-time employees in private industry had access to employer-sponsored long-term care insurance benefits; 20 percent of employees of establishments with 100 or more employees had access to this benefit. The individual market includes plans sold by insurance carriers to individuals, usually through agents or brokers. Individuals may choose benefits from a range of options offered by the carriers, including the duration and amount of daily benefit payments. Those who purchase coverage through the individual market typically pay the full premium. The carrier generally owns program assets and bears the risk of insuring enrollees for the terms of enrollees’ policies. The group market includes long-term care insurance plans offered to individuals through employers and other groups, such as professional associations. In this market, the groups usually design the benefits, and enrollees are often given some benefit options from which to choose, including the duration and amount of daily benefit payments. However, benefit options offered in the group market tend to be fewer than those offered in the individual market. Individuals who purchase long-term care insurance in the group market typically pay the full premium, similar to those who purchase coverage in the individual market. Employers and other groups typically contract with insurance carriers to provide long-term care insurance to qualified individuals. These contracts may be time-limited, lasting, for example, 3 to 5 years. Under these contracts, carriers are usually required to bear the risk of insuring enrollees for the terms of enrollees’ policies; the term of enrollees’ policies may be independent from, and therefore longer than, the length of an employer’s contract with a carrier. These carriers also generally own all program assets. 
As a result, if a carrier’s contract with an employer was not renewed, the carrier would usually be required by its contract to continue insuring those individuals for whom it issued policies. Several large carriers dominated long-term care insurance sales in the individual and group markets as of December 31, 2005. While the long-term care insurance industry experienced 18 percent annual growth in the number of policies sold from 1987 through 2002, the industry has experienced a downturn in more recent years, according to industry experts. Specifically, carriers faced several challenges, including higher-than-expected administrative expenses relative to premiums; lower-than-expected lapse rates, which increased the number of people likely to submit claims; low interest rates, which reduced the actual return on investments below what had been assumed; and new state regulations that limited direct marketing by telephone. As a result, beginning in 2003, for example, many carriers in the individual market raised premiums, left the marketplace, or consolidated to form larger companies. In addition, many carriers have revised the assumptions used in setting premiums, taking a more conservative approach that has led to higher premiums, while state regulators have increased their oversight of the industry. The federal government began offering a group long-term care insurance program in 2002 whereby certain eligible individuals affiliated with more than 125 federal agencies may apply for coverage. Individuals eligible for the Federal Long Term Care Insurance Program include federal and Postal Service employees and retirees; active and retired members of the uniformed services; qualified relatives of these individuals; and certain others. Almost 19 million people were estimated to be eligible for coverage as of October 15, 2001. 
With more than 214,000 current enrollees as of September 2006, the federal program is the largest employer-sponsored group program in the nation. When the Federal Long Term Care Insurance Program began, eligible individuals could apply for enrollment during two specified periods: an early enrollment period held from March 25, 2002, through May 15, 2002, and an open enrollment period held from July 1, 2002, through December 31, 2002. Following the open enrollment period, eligible individuals could apply for coverage at any time. As is typical for other plans sold in the group market, enrollees pay the entire cost of their premium. As we noted in our March 2006 report, the federal program offered benefits similar to those of other long-term care insurance products, usually at lower premiums for comparable benefits, and the federal program’s early enrollment and claims were lower than initially expected. OPM oversees the federal program, and Partners administers the program in accordance with the requirements of a 7-year contract between OPM and Partners. The contract, signed December 18, 2001, defines key administrative requirements, including OPM’s oversight of the program and how payments for the federal program’s expenses, as well as payments that are earmarked as profits, are determined. Unlike other contracts between employers and carriers, the federal program’s contract includes requirements for the management of federal program assets—that is, the funds collected as premiums and used to pay claims—because the federal program does not give Partners ownership of federal program assets. By statute, OPM’s contract with Partners is for 7 years and is not automatically renewable. At the end of the 7-year term, OPM can either renegotiate the contract with Partners, or allow the contract to terminate and select a new carrier. 
If a new carrier is selected, Partners must transfer all federal program enrollees and assets, including any positive or negative returns related to the experience of the program, to the federal program’s next carrier. However, if OPM does not contract with another carrier, Partners would continue insuring the individuals who enrolled in the federal program through Partners. In this case, the federal program’s assets would remain available to Partners to pay for claims and expenses. The federal program has a unique profit structure that is explicitly defined in the contract between OPM and Partners. This profit structure consists of three distinct annual payments to Partners to compensate Partners for the risks it assumes under the program’s 7-year contract. Of these payments, two are based on a percentage of the premiums collected during the year and one is based on the average annual assets of the federal program. (See table 1.) These three payments are allowed only if premiums are sufficient to cover the federal program’s current claims and expenses. In contrast to the federal program, profits realized by carriers offering other long-term care insurance plans generally are not based on explicit profit structures. Instead, under the terms of their contracts, carriers assume the risk of insuring enrollees for the terms of enrollees’ policies and own program assets—and are thus able to realize profits or losses according to the experience of the programs they insure. The federal program guarantees one annual premium-based payment to Partners. This payment equals 3.5 percent of the premiums collected in a year. For other long-term care insurance plans offered in the group and individual markets, carriers’ profits were generally not guaranteed, according to carrier officials and industry experts. Similar to the federal program, one source of carriers’ revenue is enrollee premiums, which include an amount for anticipated profits. 
However, carriers may realize profits or losses according to the experience of the programs they insure, subject to applicable state regulations. The federal program links the second annual premium-based payment to OPM’s evaluation of Partners’ performance. This payment can equal up to 3 percent of the premiums collected in a year. Under an agreement which amended the contract between OPM and Partners and became effective beginning fiscal year 2006, OPM evaluates Partners’ performance each year on 21 short-term performance measures across 4 categories: administrative expense savings, customer service, enrollment experience, and responsiveness to OPM. Each performance measure has a corresponding payment equivalent to a percentage of the total performance-based payment of 3 percent of premiums. If Partners’ performance does not meet stated expectations in a measure, the payment corresponding to that measure is placed into a retained profit account. In addition, every 3 years, OPM evaluates Partners’ performance in two long-term performance measures: return on investment and claims experience. If Partners meets expectations in one measure and funds are present in the retained profit account, one-half of the amount present in the retained profit account is paid to Partners. If Partners meets expectations in both measures and funds are present in the retained profit account, the total amount present in the retained profit account is paid to Partners. As of September 2006, Partners has been awarded the full performance-based payment each year since the federal program began. Appendix I provides a complete list of the performance measures used for the federal program. Unlike the federal program, most other sponsors of plans offered in the group market usually did not link carrier profits to performance evaluations, according to carrier officials and industry experts.
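The triennial payout rule for the retained profit account can be sketched as a small decision function. This is a minimal illustration of the contract terms described above, not OPM's actual accounting; the function name, inputs, and dollar amounts are hypothetical.

```python
def retained_account_payout(account_balance: float, measures_met: int) -> float:
    """Return the triennial payout from the retained profit account.

    Per the contract terms described in the report: meeting expectations
    in one of the two long-term measures (return on investment, claims
    experience) releases half the account; meeting both releases all of
    it; meeting neither, or an empty account, releases nothing.
    """
    if account_balance <= 0 or measures_met <= 0:
        return 0.0
    if measures_met == 1:
        return account_balance / 2
    return account_balance  # both long-term measures met


# Hypothetical $100,000 balance to illustrate the rule
print(retained_account_payout(100_000.0, 0))  # 0.0
print(retained_account_payout(100_000.0, 1))  # 50000.0
print(retained_account_payout(100_000.0, 2))  # 100000.0
```

Note that since Partners has been awarded the full performance-based payment each year, in practice no funds would have accumulated in the account.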
Carrier officials estimated that about 10 percent to 20 percent of employers required their long-term care insurance carrier to relinquish a certain percentage (for example, 2 percent to 4 percent) of premiums if their performance did not meet agreed-upon expectations. However, employers may require carriers to guarantee a certain level of performance in their contracts to ensure that enrollees are provided with standard levels of service, according to state officials. These performance measures and guarantees may include those related to the timeliness of underwriting decisions and call center performance. The federal program used all of the performance measures that industry experts and carrier officials cited as those commonly used in the group market, in addition to other measures such as administrative expense savings and claims experience. The federal program’s third payment is guaranteed and is based on the average annual assets of the program. This annual payment—0.3 percent of the average annual assets of the federal program—is defined in the contract between OPM and Partners as a profit payment. The federal program developed this payment to recognize that insurers in general are required to hold risk-based capital. Risk-based capital is the capital that an insurance carrier is required to hold in reserve, separate from any other funds used to back insurance liabilities or other lines of business, to protect the carrier from insolvency. OPM does not require Partners to use the third payment to fund risk-based capital and both OPM and Partners consider this payment another form of profit for administering the program. Similar to the federal program, carriers use enrollee premium funds to fund risk-based capital requirements, according to OPM officials. However, risk-based capital may be considered an expense to carriers. The federal program used marketing efforts that were generally similar to those used for other plans sold in the group market.
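Taken together, the three annual profit payments described above can be computed from a year's collected premiums and average assets. The sketch below is a minimal illustration; the premium and asset figures are hypothetical, and `performance_share` stands in for whatever fraction of the 3 percent performance-based payment OPM awards in a given year.

```python
def annual_profit_payments(premiums: float, avg_assets: float,
                           performance_share: float = 1.0) -> dict:
    """Compute the three contract-defined annual payments to Partners.

    performance_share is the awarded fraction (0.0-1.0) of the
    performance-based payment. All three payments are allowed only if
    premiums cover the program's current claims and expenses, which is
    assumed here.
    """
    return {
        "guaranteed_premium_based": 0.035 * premiums,  # 3.5% of premiums
        "performance_based": 0.03 * premiums * performance_share,  # up to 3%
        "asset_based": 0.003 * avg_assets,  # 0.3% of average annual assets
    }


# Hypothetical year: $200 million in premiums, $500 million average assets
payments = annual_profit_payments(200e6, 500e6)
for name, amount in payments.items():
    print(f"{name}: ${amount:,.0f}")
```

Because Partners has been awarded the full performance-based payment each year since the program began, `performance_share` would have been 1.0 in every year through September 2006.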
For example, the types of marketing efforts used for the federal program and other plans offered in the group market, according to our review of federal program documents and the carrier and state officials we spoke with, included mailing information directly to the homes of eligible individuals, sending e-mails to eligible employees at work, posting information on a Web site, hosting employee and retiree seminars, and working with affinity groups whose membership consists of eligible individuals. Of these efforts, carrier officials we spoke with told us that direct mail, which may include a personalized letter, is a critical marketing effort for long-term care insurance plans. One carrier official also explained that direct mail was so critical to the carrier’s marketing strategy that it generally would not work with employers who neither provide the home addresses of employees nor assist in mailing materials to employees. The federal program faced a significant challenge in sending information directly to eligible individuals, particularly through direct mail. Specifically, the federal program was initially unable to mail information directly to the homes of about 60 percent of the program’s core group of eligible individuals that Partners deemed most likely to enroll in the federal program—including nearly all active federal civilian employees—because neither OPM nor Partners had the home addresses of these individuals. OPM officials told us that a centralized database of this information does not exist. According to OPM officials, OPM did not request federal employees’ home address information from other federal agencies because they felt it would be too burdensome to comply with certain Privacy Act requirements and gather accurate information from each of the agencies in a timely manner.
Despite this challenge, the federal program initially mailed information directly to the homes of those for whom it had addresses, which included about 40 percent of the program’s core group of eligible individuals, as well as other non-core groups such as retired military personnel and annuitants under the Civil Service Retirement System and the Federal Employees Retirement System, according to our analysis of Partners’ data. As of October 2006, Partners officials noted that direct mail efforts were still limited because of the federal program’s inability to mail information directly to the homes of most active federal civilian employees. Before signing its contract with OPM, Partners was aware of the federal program’s limitations regarding direct mail. As a result of the federal program’s limited ability to send direct mail to many eligible individuals, the federal program relied heavily on marketing efforts that were less direct and less personalized, including sending information to federal employees through agency benefits officers and working with affinity groups. Because neither OPM nor Partners has direct access to federal employees through e-mail, Partners has worked with more than 150 agency benefits officers to distribute program information to federal employees through e-mail, internal office mail, or other means. For example, Partners relies on agency benefits officers to send e-mails about the federal program. While Partners officials may be notified by agency benefits officers when they send program information, Partners is unable to determine whether all eligible federal employees receive this information. In addition, Partners has worked with several affinity groups, such as Federally Employed Women and the National Active and Retired Federal Employees Association, to educate their members about the need for long-term care insurance and to advertise in publications and at sponsored events. 
Through these efforts, Partners has gained direct access to the groups’ members. In the federal program’s fourth year, claims experience—the amount of claim payments per enrollee and the number of paid claims per enrollee—increased from that of the program’s third year, but remained lower than Partners’ expectations as established in its contract with OPM. This increase was generally consistent with trends since the federal program began in 2002. As we reported in March 2006, claims experience in the federal program’s first 3 years was lower than the initial expectations set by Partners. Our analysis of Partners’ data showed that claims experience also remained lower than expected for the federal program’s fourth year. As of March 31, 2006, the end of the federal program’s fourth year, the federal program had cumulatively paid 44 percent of the expected amount of claim payments per enrollee and 41 percent of the expected number of claims per enrollee, across the 4 years, as shown in table 2. Figure 1 shows the amount of claims payments per 10,000 enrollees. As of August 2006, Partners had not determined why the claims experience was lower than Partners’ expectations. Claims experience is one of many factors—such as interest rates and lapse rates—that affect the long-term financial outlook of a long-term care insurance program. While it is generally expected that the number of claims submitted in the first few years of a long-term care insurance program will be a small portion of the total number of claims submitted over time, the rate of claim submissions usually begins to increase after about 3 to 7 years, according to industry experts. Our findings from two reports together show that the Federal Long Term Care Insurance Program compared favorably with other plans, has a unique profit structure, and used marketing efforts that were generally similar to those of other plans, but faced a significant challenge.
Specifically, our initial report found that the federal program offered benefits similar to those of other long-term care insurance products, usually at lower premiums for comparable benefits. In this, our second report, we examined other components of the federal program’s competitiveness, including the federal program’s profit structure and marketing efforts. We found that the federal program has a unique profit structure, created to compensate Partners for the risks it assumed for the program. The risks borne by Partners, however, are not as great as those assumed by carriers selling other plans because, unlike with other plans, the federal program’s assets are owned by the program, not by the insurer. Because of this structure, the program does not link Partners’ profits to the overall experience of the program. Rather, the program guarantees some profit payments, links some profit payments to OPM’s evaluation of Partners’ performance, and requires Partners to assume a potentially time-limited risk, after which all program assets and enrollees may be transferred to another carrier. Insurance carriers’ profits are linked to the amount of risk they bear, and Partners assumes less risk for insuring the federal program than do carriers for insuring other long-term care insurance plans. Therefore, the federal program’s profit payments would likely be lower than the profits realized by carriers selling other plans. In addition, while the federal program used marketing efforts that were generally similar to those used for other plans sold in the group market, the program faced a significant challenge in providing personalized marketing communications directly to eligible individuals and instead relied heavily on other marketing efforts. In our initial report we found that the federal program’s claims experience—the amount and number of claims payments per enrollee—was lower than expected in the first 3 years of the program.
While it is generally expected that the number of claims submitted in the first few years of a long-term care insurance program will be a small portion of the total number of claims submitted over time, a program’s claims experience is one of several factors that may affect the long-term financial outlook of the program. In response to our recommendation in the initial report that OPM analyze the claims experience and assumptions affecting premiums to inform forthcoming contract negotiations, OPM indicated that it intended to provide updated information on claims experience and premium setting in its written recommendation to Congress before entering into the next contract for the administration of the Federal Long Term Care Insurance Program. Partners’ current contract with OPM for the administration of the federal program ends December 31, 2008. After reviewing a fourth year of claims data, we note that the program’s claims experience increased from that of the program’s third year, but still remains lower than Partners’ expectations. These results underscore the importance of our prior recommendations that OPM analyze the claims experience and assumptions as it considers its recommendations to Congress regarding a future contract. We provided a draft of this report to OPM and Partners. In its written comments, OPM generally agreed with our findings. OPM’s comments are reprinted in appendix II. With regard to the program’s unique profit structure, OPM stated that now that it has more operating experience with the program, it plans to reexamine the profit structure as it renegotiates or rebids the contract for the administration of the program. OPM agreed that the marketing efforts for the federal program are more challenging for Partners than for other insurers because, among other reasons, home addresses for federal employees are generally not available. OPM noted that this will continue to be a constraint for the program in the future. 
In addition, OPM highlighted, as we noted in our draft report, that the ratio of actual to expected claims experience has narrowed and stated that it would continue to closely monitor the claims experience of the program. We support this effort and continue to encourage OPM to analyze the program’s claims experience and ensure that premiums and actuarial assumptions about future claims reflect the experience of the program. In its comments, Partners highlighted certain distinct aspects of its profit structure that we noted in our draft report, including that Partners does not own federal program assets and that a profit payment is contingent on meeting specific performance standards that Partners characterized as exceptionally high for the insurance industry in general. Partners also stated that profit payments are paid only if the federal program’s assets are sufficient to cover the risks incurred by the program, as our draft report noted. Regarding the marketing efforts used for the federal program, Partners noted that the terrorist attacks of 2001 and the anthrax scare, which caused heightened security at federal buildings, added to the marketing challenge acknowledged in the report. We revised the report to reflect these circumstances. Finally, Partners commented, and as we noted in our draft report, that in addition to a program’s claims experience, premium rates are affected by a number of factors, including lapse rates and interest rates. OPM and Partners provided technical comments and clarifications, which we incorporated as appropriate. We are sending copies of this report to the Director of OPM and interested congressional committees. We will also provide copies to others on request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The Federal Long Term Care Insurance Program makes some profit payments to Long Term Care Partners LLC (Partners) according to the Office of Personnel Management’s (OPM) evaluation of Partners’ performance. Beginning in fiscal year 2006, OPM evaluates Partners’ performance each year on 21 short-term performance measures across 4 categories: administrative expense savings, customer service, enrollment experience, and responsiveness to OPM (see table 3). Every 3 years, OPM also evaluates Partners’ performance in two long-term performance measures: claims experience and return on investment (see table 4). In addition to the contact named above, Christine Brudevold, Assistant Director; Patricia Roy; Timothy Walker; and Rasanjali Wickrema made key contributions to this report. Long-Term Care Insurance: Federal Program Compared Favorably with Other Products, and Analysis of Claims Trend Could Inform Future Decisions. GAO-06-401. Washington, D.C.: March 31, 2006. Overview of Long-Term Care Partnership Program. GAO-05-1021R. Washington, D.C.: September 9, 2005. Long-Term Care Financing: Growing Demand and Cost of Services Are Straining Federal and State Budgets. GAO-05-564T. Washington, D.C.: April 27, 2005. | Spending on long-term care services--about $193 billion in 2004--is expected to rise. In 2000, Congress passed the Long-Term Care Security Act, requiring the federal government to offer long-term care insurance. To do so, the Office of Personnel Management (OPM) contracted with Long Term Care Partners LLC (Partners) to create the Federal Long Term Care Insurance Program. This is the second of two reports required by the act on the competitiveness of the federal program. 
GAO's March 31, 2006, report, Long-Term Care Insurance: Federal Program Compared Favorably with Other Products, and Analysis of Claims Trend Could Inform Future Decisions (GAO-06-401), found that the federal program's benefits and premiums compared favorably with other plans, but enrollment and claims experience--the amount and number of claims payments--were lower than Partners expected. In this report, GAO compared the federal program's profit structure and marketing efforts with those of other plans and updated its analysis of the program's claims experience. GAO reviewed the contract between OPM and Partners and interviewed OPM, Partners, and insurance carrier officials, as well as actuaries and industry experts. GAO also analyzed data on claim payments for the federal program since it began in 2002. The Federal Long Term Care Insurance Program has a unique profit structure that is explicitly defined in the contract between OPM and Partners. This profit structure consists of three distinct annual payments to Partners: (1) a guaranteed payment of 3.5 percent of the year's collected premiums, (2) a payment linked to OPM's evaluation of Partners' performance of up to 3 percent of the year's collected premiums, and (3) a guaranteed payment of 0.3 percent of the average annual assets of the program. These payments are separate from other payments made to cover the program's expenses. In contrast to the federal program, profits realized by carriers offering other long-term care insurance plans generally are not based on explicit profit structures, but rather on the experience of the programs they insure. The federal program's marketing efforts were generally similar to those used for other plans sold in the group market, but faced a significant challenge in sending information directly to eligible individuals. 
The federal program and other plans sold in the group market used such marketing efforts as mailing information to the homes of eligible individuals and hosting employee and retiree seminars. Of these efforts, carrier officials GAO spoke with explained that mailing to the homes of eligible individuals was critical to their marketing strategy. The federal program faced a significant challenge in mailing information to the homes of those eligible for the program because it initially did not have the home addresses for nearly all active federal civilian employees. Because of this challenge, the federal program relied heavily on marketing efforts that were less direct and less personalized, such as sending information to federal employees through agency benefits officers. The federal program's claims experience increased in the program's fourth year, but remained lower than the expectations established by Partners in its contract with OPM. This increase was generally consistent with trends since the federal program began in 2002. Overall, the federal program has paid 44 percent of the expected amount of claim payments per enrollee and 41 percent of the expected number of claims per enrollee. As of August 2006, Partners officials had not determined why the claims experience was lower than Partners' expectations. While it is generally expected that the number of claims submitted in the first few years of a long-term care insurance program will be a small portion of the total number of claims submitted over time, a program's claims experience is one of several factors that may affect its long-term financial outlook. The results of this analysis underscore GAO's prior recommendations that OPM analyze the claims experience and assumptions affecting premiums to inform forthcoming contract negotiations. In commenting on a draft of this report, OPM generally agreed with the report's findings. |
Air power has played a pivotal role in America’s military force since World War I when aircraft were first used in combat. In World War II, it was indispensable to U.S. forces to achieve victory. After the war, the Department of the Navy invested in longer-range aircraft and larger aircraft carriers to provide worldwide coverage from the sea. With the proven success of air power and development of the intercontinental-range bomber, the Department of the Air Force was established in 1947, with the Air Force taking its place alongside the other three services. During the Cold War, America’s air power was a critical element of both its nuclear deterrent forces and its conventional combat forces. A massive U.S. aerospace industry developed, giving the United States a research, development, and production base that has dramatically advanced airframes, propulsion, avionics, weapons, and communications, and helped shape and broaden the role of air power in U.S. military strategy. Today the Department of Defense (DOD) has what some refer to as the “four air forces,” with each of the services possessing large numbers of aircraft. Air power includes not only fixed-wing aircraft but also attack helicopters, long-range missiles, unmanned aerial vehicles, and other assets that give the United States the ability to maintain air superiority and to project power worldwide through the air. During the Persian Gulf War, the unparalleled capabilities of these forces were demonstrated as U.S. and coalition forces dominated the conflict. Sweeping changes in the global threat environment, sizable reductions in resources devoted to defense, technological advancements in combat systems, and other factors have significantly affected DOD’s combat air power. Ensuring that the most cost-effective mix of combat air power capabilities is identified, developed, and fielded in such an environment to meet the needs of the combatant commanders is a major challenge. 
In October 1993, DOD reported on its bottom-up review of defense needs in the post-Cold War security environment. The review outlined specific dangers to U.S. interests, strategies to deal with the dangers, an overall defense strategy for the new era, and force structure requirements. The strategy called on the military to be prepared to fight and win two nearly simultaneous major regional conflicts, engage in smaller-scale operations, meet overseas presence requirements, and deter attacks by weapons of mass destruction. Table 1.1 shows the overall size and structure of the general purpose forces DOD determined are needed to execute the strategy and the approximate number of associated combat aircraft. DOD currently has about 5,900 such aircraft as it continues drawing down its forces. In addition to these fighter and attack aircraft, DOD has other important combat aviation elements, including over 1,500 specialized support aircraft, such as those used for refueling, command and control, reconnaissance, and suppressing enemy air defenses, and about 250 aircraft in its special operations forces. Appendix I identifies the principal aircraft, long-range missiles, and other weapons and assets that were covered by our review. Two key DOD documents that provide guidance concerning the planning for and use of combat air power are the Secretary of Defense’s Defense Planning Guidance and the Chairman of the Joint Chiefs of Staff’s current National Military Strategy dated 1995. These documents build on the strategy, plans, and programs identified in the Bottom-Up Review. According to the Defense Planning Guidance and the National Military Strategy, U.S. forces, in concert with regional allies, are to be of sufficient size and capabilities to credibly deter and, if necessary, decisively defeat aggression by projecting and sustaining U.S. power during two nearly simultaneous major regional conflicts. 
The services’ forces are also expected to be prepared to fight as a joint team, with each service providing trained and ready forces to support the commanders in chief (CINC) of the combatant commands. U.S. air power is to be able to seize and control the skies, hold vital enemy capabilities at risk throughout the theater, and help destroy the enemy’s ability to wage war. Air power is also expected to provide sustained, precision firepower; reconnaissance and surveillance; refueling; and global lift. The ability of combat aircraft to respond quickly to regional contingencies makes them particularly important in the post-Cold War era. Both documents discuss the criticality of enhancements to existing systems and the selected modernization of forces to DOD’s ability to carry out the military strategy. Each expresses concerns about upgrading and replacing weapon systems and equipment under constrained budgets. In recognition of the costly recapitalization planned and the projected budgetary resources to support it, the Chairman’s strategy states that major modernization programs involving significant investment are to be undertaken “only where there is clearly a substantial payoff.” A new document—Joint Vision 2010—provides the military services a common direction in developing their capabilities within a joint framework. Like the guidance and strategy documents, the vision document cites the need for more efficient use of defense resources. It stresses the imperativeness of jointness—of integrating service capabilities with less redundancy in and among the services—if the United States is to retain effectiveness when faced with flat budgets and increasingly more costly readiness and modernization. The authority of the military departments to acquire air power and other assets stems from their broad legislative responsibilities to prepare forces for the effective prosecution of war (Title 10 U.S. Code). 
DOD Directive 5100.1, which identifies the functions of the DOD and its major components, authorizes the military departments to develop and procure weapons, equipment, and supplies essential to fulfilling their assigned functions. Under the directive, the Army’s primary functions include the preparation of forces to defeat enemy land forces and seize, occupy, and defend land areas; the Navy’s and/or Marine Corps’ functions include the preparation of forces to gain and maintain general naval supremacy and prosecute a naval campaign; and the Air Force, the preparation of forces to gain and maintain air supremacy and air interdiction of enemy land forces and communications. The Marine Corps is also expected to conduct amphibious operations. All services are authorized to develop capabilities to attack land targets through the air to accomplish their primary missions. The directive also states that the military departments are to fulfill the current and future operational requirements of the combatant commands to the maximum extent practical; present and justify their respective positions on DOD plans, programs, and policies; cooperate effectively with one another; provide for more effective, efficient, and economical administration; and eliminate duplication. The individual services have always had the primary role in weapons acquisition. In an attempt to strengthen the joint orientation of the Department, Congress enacted the Goldwater-Nichols Department of Defense Reorganization Act of 1986. This act, which amended title 10, gave the Chairman of the Joint Chiefs of Staff and the combatant commanders stronger roles in Department matters, including weapons acquisition. It designated the Chairman as principal military adviser to the President, the National Security Council, and the Secretary of Defense and gave him several broad authorities. 
For example, the Chairman is expected to provide for strategic direction of the armed forces, prepare strategic plans, perform net assessments of the capabilities of U.S. and allied armed forces compared with those of potential adversaries, and advise the Secretary on the requirements, programs, and budgets of the military departments in terms of the joint perspective. Regarding this latter responsibility, the Chairman is expected to (1) provide advice on the priorities of requirements identified by the commanders of the combatant commands, (2) determine the extent to which program recommendations and budget proposals conform with the combatant commands’ priorities, (3) submit alternative program recommendations and budget proposals within projected resource levels to achieve greater conformance with these priorities, and (4) assess military requirements for major defense acquisition programs. In addition to these responsibilities, the National Defense Authorization Act for fiscal year 1993 directed the Chairman to examine what DOD can do to eliminate or reduce duplicative capabilities. Assisting the Chairman in providing the Secretary advice on military requirements and the programs and budgets of the military departments is the Joint Requirements Oversight Council (JROC) and the Joint Staff, which are subject to the authority, direction, and control of the Chairman. Within the Office of the Secretary of Defense (OSD), the Office of the Director of Program Analysis and Evaluation provides, in part, analytical support to the Secretary in the management and oversight of service programs and budgets. The overall objective of this review was to assess whether the Secretary of Defense has sufficient information from a joint perspective to help him decide whether new investments in combat air power should be made, whether programmed investments should continue to be funded, and what priority should be given to competing programs. 
To gain a broad perspective on the context in which these decisions are made, we sought to determine (1) how U.S. air power capabilities have changed since the end of fiscal year 1991; (2) what potential threat adversary forces pose to U.S. air power; (3) what contribution combat air power modernization programs will make to aggregate U.S. capabilities; and (4) how joint warfighting assessments are used to support the Secretary in making air power decisions. The scope of our review included (1) fighter and attack aircraft, including attack helicopters and long-range bombers equipped for conventional missions; (2) key specialized support aircraft that enhance the capability of combat aircraft; (3) munitions employed by combat aircraft; and (4) other major systems—particularly long-range missiles, theater air defense systems, and unmanned aerial vehicles—that perform missions traditionally assigned to combat aircraft. Our scope did not encompass assets dedicated primarily to airlift, such as the C-17 and V-22 aircraft, and U.S. special operations forces. Also, the potential contribution of allied forces was not considered. We reviewed in detail six key mission areas in which combat air power plays a prominent role: performing offensive and defensive operations to achieve and maintain air superiority in areas of combat operations, interdicting enemy forces before they can be used against friendly forces, providing close support for ground forces by attacking hostile forces in close proximity to friendly forces, suppressing enemy air defenses by jamming or destroying them, refueling combat aircraft in the air to sustain combat operations, and performing surveillance and reconnaissance to obtain intelligence data for combat operations. In conducting these reviews, we reviewed numerous reports, studies, and other documents containing information on these missions and the primary platforms and weapons used.
We discussed capabilities, requirements, force structure, and modernization issues with officials and representatives of various offices within OSD, the Organization of the Joint Chiefs of Staff, the military services, and the operational commands. We compared and contrasted performance data on current and planned weapon systems by mission area to acquire a good understanding of the joint capabilities of the military forces to perform the missions and to identify overlaps and gaps in capabilities. Separate reports on the interdiction, close support, suppression of enemy air defenses, and air refueling reviews have already been issued, while our reports on air superiority and surveillance and reconnaissance are still being prepared. A listing of the four issued reports and of other GAO reports related to this body of work is included at the end of this report. We supplemented the six mission reviews with more detailed assessments of (1) recent and planned changes in the capabilities of U.S. forces and of the current and projected capabilities of potential adversaries to counter U.S. air power and (2) the military advice on joint requirements and capabilities being developed through the Chairman of the Joint Chiefs of Staff for the Secretary of Defense. For information on changes in U.S. capabilities, we drew upon information gathered on the six mission reviews. We also used examples from our other published reports on major DOD modernization programs to illustrate our findings. For information on current and projected capabilities of potential adversaries, we reviewed reports of the Central Intelligence Agency, Defense Intelligence Agency, and Arms Control and Disarmament Agency and discussed threat information with intelligence agency personnel. 
To assess information being developed for the Secretary of Defense on joint air power requirements and aggregate capabilities of the services to meet those requirements, we evaluated the JROC and its supporting joint warfighting capabilities assessment (JWCA) process, which assist the Chairman in carrying out his responsibilities. We discussed the functioning of this process and air power issues being examined with Joint Staff officials who oversee the process as well as assessment team representatives from the Joint Staff and OSD. We reviewed the May 1995 report by the independent Commission on Roles and Missions of the Armed Forces. We also discussed the report with Commission staff and reviewed documents the Commission developed or acquired. We conducted this review from May 1994 through June 1996 in accordance with generally accepted government auditing standards. While force downsizing may give the appearance of a loss in capability, the United States continues to retain in its conventional inventory about 5,900 modern fighter and attack aircraft, including 178 long-range bombers and 1,732 attack helicopters, and over 1,500 specialized support aircraft. It also has growing inventories of advanced precision air-to-air and air-to-ground weapons for its combat aircraft to carry and an expanding arsenal of accurate long-range surface-to-surface missiles to strike ground targets. Inventory levels for the aircraft included in our review are shown in appendix II. DOD has spent billions of dollars in recent years to make its current frontline combat aircraft and helicopters more efficient and effective. These enhancements include improved navigation, night fighting, target acquisition, and self-protection capabilities as well as more aircraft capable of using advanced munitions. 
Specialized support aircraft used for air refueling and surveillance and reconnaissance, which are vital to the effectiveness of combat aircraft, have also been improved, while forces for suppressing enemy air defenses are being restructured. Additionally, advances in the ability of U.S. forces to identify targets and communicate that information quickly to combatant units should further enhance the capabilities of current forces. The size and composition of the U.S. combat air power force structure have changed considerably since fiscal year 1991, the year the Persian Gulf War ended. Cutbacks in the number of combat aircraft adopted by the Bush administration and further cutbacks by the Clinton administration in its 1993 Bottom-Up Review are scheduled to be completed in 1997. While the number of fighter and attack aircraft, including B-1B bombers and attack helicopters, is being reduced about 28 percent from 1991 levels, other new and emerging elements of combat air power, such as long-range missiles and theater air defense forces, have grown in number and capability. Specialized support aircraft have experienced varying levels of change in their inventory. Changes in aviation needs since the end of the Cold War, coupled with cuts in defense spending, have led DOD to reduce its combat aircraft inventory. These changes have been most pronounced for Air Force, Navy, and Marine Corps fixed-wing fighter and attack aircraft and Air Force bombers—from about 6,400 in 1991 to about 4,160 in 1996. DOD considers about 65 percent of these aircraft as authorized to combat units to perform basic combat missions and 35 percent of them as backup aircraft maintained for training, testing, maintenance, and attrition replacement reserves. Figure 2.1 shows the change in the total inventories of these types of aircraft from 1991 to 1996. 
This smaller combat force structure has been accomplished primarily by retiring older aircraft that are often expensive to operate and maintain, such as the Navy and Marine Corps A-6 medium bomber and A-7 light attack plane and the Air Force A-7, F-4 fighter, and F-111 strike aircraft. At the same time, many newer model aircraft have entered the fleet since the Persian Gulf War, including about 70 F-15E strike fighters, about 250 F-16 multimission fighters, and 200 F/A-18 fighter and attack aircraft. Changes in inventory levels by aircraft model are shown in appendix II. Some important capabilities are being retired as these older aircraft are removed from the inventory. For example, the Navy will lose the payload, range, and all-weather capability of the A-6, and the Air Force will lose the speed and nighttime-precision bombing capability of the F-111. DOD believes, however, that it can do without these assets, given the dangers it expects to face and the high costs of upgrades, operations, and support that it can avoid by retiring these aircraft. Attack helicopter inventories have fallen only 4 percent—1,811 to 1,732. Many of the older helicopters in the 1991 inventory have been replaced by newer more capable ones. The Army has added about 150 AH-64A Apache attack helicopters and nearly 300 OH-58D Kiowa Warrior armed reconnaissance helicopters to its fleet, and the Marine Corps has added over 70 AH-1W Cobras to its fleet. At the same time, both services have retired nearly 600 older AH-1 Cobras. Figure 2.2 shows attack helicopter inventory changes. From fiscal years 1991 through 1996, about $4.5 billion was appropriated to acquire long-range missiles, and the combined inventories of these missiles more than tripled from 1,133 to over 3,750. (This does not include conventional air-launched cruise missiles as inventory data on those weapons is classified.) 
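As an illustrative check (our own sketch, not part of the report's analysis), the "more than tripled" characterization follows directly from the inventory counts just cited:

```python
# Long-range missile inventories cited in the report (excluding
# conventional air-launched cruise missiles, whose counts are classified).
fy91_inventory = 1133
fy96_inventory = 3750  # reported as "over 3,750"; the floor is used here

growth_factor = fy96_inventory / fy91_inventory
print(f"growth factor: {growth_factor:.2f}x")  # about 3.31x, i.e., more than tripled
```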
The Navy Tomahawk land-attack cruise missile and the Army tactical missile system (ATACMS) have been used to attack a variety of fixed targets, including air defense and communications sites, often in high-threat environments. The Gulf War and subsequent contingency operations, including, most recently, September 1996 attacks on Iraqi military installations, have demonstrated that long-range missiles can carry out some of the missions of strike aircraft while they reduce the risk of pilot losses and aircraft attrition. Although the number of ships (including attack submarines) capable of firing the Tomahawk grew only slightly—from 112 to 119—between 1991 and 1996, the Navy’s overall ability to fire these land-attack missiles has grown considerably. This is because a greater number of the ships capable of firing the missile are now surface ships and surface ships are able to carry more Tomahawks than submarines. The Navy has also demonstrated that the ATACMS can be fired successfully from surface ships. This offers the possibility of future enhancements to the Navy’s long-range missile capabilities. DOD has not reduced its inventories of combat support aircraft used for nonlethal suppression of enemy air defenses (SEAD) and air refueling to the same extent as its fixed-wing combat forces. Inventory levels of specialized surveillance and reconnaissance aircraft have been reduced significantly but will be replaced by other reconnaissance assets. Figure 2.3 shows the changes in the inventory levels for these types of specialized aircraft as a percentage of 1991 fleet size. The 5-percent reduction in specialized nonlethal SEAD aircraft reflects a decline of 10 aircraft (from 188 in fiscal year 1991 to 178 in fiscal year 1996); the 16-percent reduction in air refueling aircraft reflects a decline of 171 aircraft (from 1,046 to 875); and the 44-percent reduction in surveillance and reconnaissance aircraft reflects a decline of 415 aircraft (from 943 to 528).
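The rounded percentages cited for the specialized fleets follow directly from the fiscal year 1991 and 1996 inventory counts; a short Python sketch (illustrative only, not part of the report's methodology) reproduces the arithmetic:

```python
# Fiscal year 1991 and 1996 inventory counts cited in the report.
inventories = {
    "nonlethal SEAD aircraft": (188, 178),
    "air refueling aircraft": (1046, 875),
    "surveillance and reconnaissance aircraft": (943, 528),
}

for name, (fy91, fy96) in inventories.items():
    pct_decline = round(100 * (fy91 - fy96) / fy91)
    print(f"{name}: {fy91} -> {fy96}, about {pct_decline} percent decline")
```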
Most of the latter decline was due to the retirement of 184 Air Force RF-4C penetrating reconnaissance aircraft and 159 Navy P-3 antisubmarine warfare aircraft. The Air Force is making a transition to greater use of unmanned aerial vehicles to provide reconnaissance over enemy airspace and is equipping some F-16 fighters with sensors for such missions. The submarine threat to U.S. forces has diminished since the fall of the Soviet Union, reducing the need for antisubmarine warfare assets. Though DOD’s aviation force is smaller today, many of the combat aircraft are newer and more highly capable, allowing for greater flexibility in the employment of force across a broader range of operating environments. Acting on lessons learned from the Persian Gulf War and recommendations made by organizations such as the Defense Science Board, DOD has taken steps to make many of the remaining combat aircraft more capable, including improvements such as autonomous navigation, night fighting, target acquisition, and self-protection and the employment of advanced munitions. Based on aircraft performance during the Gulf War, DOD has identified these capabilities as vital to the efficiency and effectiveness of attack aircraft. Advances in miniaturizing and modularizing subsystems have allowed DOD to enhance aircraft capabilities within existing airframes, overcoming concerns about space and weight limitations. Theater air defense systems are also being improved as concern increases about cruise and ballistic missiles armed with weapons of mass destruction. Similarly, DOD has enhanced the capabilities of specialized support aircraft and long-range missiles and plans further improvements to these systems. Congress has mandated that all DOD aircraft be able to use the global positioning system by the end of fiscal year 2000.
This system allows for precise positioning and navigation across a broad range of missions, contributing to better situational awareness and more efficient use of forces. It also can be used to deliver munitions accurately in all weather conditions. The number of aircraft with night fighting and target acquisition capabilities—both critical to the flexibility and effectiveness of combat aircraft—has increased significantly since fiscal year 1991. What constitutes a night fighting capability varies between platforms. During the Gulf War, night capability for the F-15E consisted of LANTIRN (low altitude navigation targeting infrared for night) targeting pods only. These pods give pilots the ability to accurately target weapons day or night in adverse weather. Night-capable F-16s used during the Gulf War had LANTIRN navigation pods only. Today, F-15E and F-16 night capability consists of aircraft using both LANTIRN targeting and navigation pods. Gulf War night capability for the F/A-18 consisted of either a navigation or targeting forward-looking infrared pod and/or night vision goggles. No night-capable A-10 or AV-8B Harrier aircraft were used during the Gulf War, but today A-10 pilots can use night vision goggles, and the night attack AV-8B is equipped with a navigation forward-looking infrared pod, and its pilots are equipped with night vision goggles. The number of night-capable helicopters has grown by more than 500 as more Apaches and Kiowa Warriors have entered the Army fleet and more AH-1W Cobra helicopters have entered the Marine Corps fleet. The change in night fighting capability since 1991 for selected aircraft types is shown in figure 2.4. Today, more than 600 F-15Es and F-16s can use all or part of LANTIRN for night fighting. The Air Force plans to equip 250 F-16s with cockpit changes that will enable their pilots to use night vision goggles to complement the LANTIRN capability. 
Inventories of night-capable F/A-18 aircraft have grown by more than 350 from 1991 to 1996, as DOD invested hundreds of millions of dollars in forward-looking infrared pods. More than 250 A-10 attack aircraft have been equipped for night operations. Although about 355 night-capable Navy A-6 and Air Force F-111F aircraft will be gone from the inventory by the end of fiscal year 1996, overall, DOD increased the number of night-capable combat aircraft by over 900. Beginning in 1996, many Navy F-14 aircraft started receiving LANTIRN and night vision cockpit modifications. To enhance the survivability of attack aircraft, the services are equipping them with new self-protection jammers, upgraded radar warning receivers, and increased expendable countermeasures. In past work, we have noted performance problems with many of these systems. In addition, the Air Force is currently adding towed decoys to further enhance the survivability of its F-16s. Also, the Marine Corps plans to (1) add a missile warning system to its AV-8B and AH-1W aircraft to alert aircraft crews of a missile attack and (2) install the combined interrogator transponder on its F-18C/D aircraft to enable crews to identify other aircraft beyond visual range as either friendly or hostile. This identification capability is expected to reduce the incidence of fratricide. During the Gulf War, only the Air Force F-15 had this capability. Equipping aircraft with the subsystems needed to employ advanced munitions is a critical force enhancement that DOD considers necessary to successfully execute its military strategy. DOD is making a sizable investment in such weapons. For example, it estimates it will spend over $15 billion on five major precision-guided munitions (PGM) for its combat aircraft—the joint stand-off weapon (JSOW), the joint direct attack munition (JDAM), the Longbow Hellfire missile, the sensor fused weapon, and the joint air-to-surface standoff missile. 
Additionally, other PGMs for aircraft valued at nearly $4 billion entered the inventory from 1992 through 1996. More than nine times as many F-16s and, with the growth in F-15E inventory, one-and-a-half times as many F-15Es can employ PGMs in 1996 as could do so in 1991. Overall, DOD estimates it has about twice as many aircraft capable of employing these types of weapons as it did during the Gulf War. The Hellfire missile has given more Army and Marine Corps helicopters a PGM capability. Future PGM development will concentrate on developing standoff weapons. Although some PGM capability is being lost through retirement of the Air Force F-111F and Navy A-6E, DOD expects to retain roughly the current level of capability into the next century. In response to the growing threat of theater ballistic missiles that are used in regional conflicts and can be armed with weapons of mass destruction, DOD is increasing funding to upgrade existing and planned air defense systems—a critical component of U.S. air superiority forces—and plans more advanced developments as the threat evolves. The Army’s Patriot PAC-3 and upgrades to the Navy’s area defense system will provide the near-term response to this threat. Upgrades to the Air Force E-3 and Navy E-2C surveillance and reconnaissance aircraft should also enhance capabilities to counter the long-range cruise missile threat through improved detection of cruise missiles en route to their targets. The Space-Based Infrared System is also being developed to aid in missile warning and missile defense. DOD plans to spend over $6 billion during the next 5 years to develop future theater missile defense systems, including the theater high-altitude air defense system. Since the Gulf War, the Navy has improved its Tomahawk missile’s operational responsiveness, target penetration, range, and accuracy.
It has added global positioning system guidance and redesigned the warhead and engine in the missile’s block III configuration that entered service in 1993. The Navy will upgrade or remanufacture existing Tomahawk missiles with (1) jam-resistant global positioning system receivers and an inertial navigation system to guide the missile throughout the mission and (2) a forward-looking terminal sensor to autonomously attack targets. These missiles are expected to enter service around 2000. The ATACMS block IA, scheduled for delivery in fiscal year 1998, is an upgrade that will nearly double the range of the missile and increase its accuracy. More advanced versions of the ATACMS—block II and IIA—will use the brilliant anti-armor submunition, which is scheduled to enter service after the turn of the century. This submunition will give the missile the ability to acquire, track, and home on operating armored vehicles deep into enemy territory. The services are also selectively upgrading their specialized aviation assets for surveillance and reconnaissance, SEAD, and air refueling. Coupled with force restructuring, DOD expects these upgrades to enhance combat operations and expand opportunities to perform joint operations and provide cross-service support. DOD has identified battlefield surveillance as a critical force enhancement needed to improve the capabilities, flexibility, and lethality of general purpose forces and ensure the successful execution of the National Military Strategy. The Air Force and Navy have improved existing sensors that enhance the capability of current surveillance and reconnaissance aircraft—the U-2R, RC-135V/W, and EP-3E—to provide intelligence support to combat forces. Heading the list of battlefield surveillance improvements, as shown in the Secretary of Defense’s annual report, is the E-8C Joint Surveillance Target Attack Radar System.
With its synthetic aperture radar and moving target indicator, this system is designed to provide wide area, real-time information on the movement of enemy forces to air and ground units. Also, DOD has invested hundreds of millions of dollars, and plans to invest about $1.5 billion more over the next 5 years, to develop and procure unmanned aerial vehicles. DOD expects that these vehicles will provide complementary battlefield reconnaissance and reduce the need for manned reconnaissance aircraft to penetrate enemy airspace. The Air Force is improving its E-3 and the Navy its Hawkeye E-2C aerial surveillance and control aircraft in their roles as early warning and airborne command and control platforms. For the E-3, $220 million was appropriated for fiscal year 1996 to improve the aircraft’s capabilities. Annual modification expenditures for the E-2C more than doubled in 1995 from those in 1991, despite a shrinking inventory. The Air Force RC-135 and Navy EP-3E signals intelligence aircraft are also being upgraded to improve the collection and dissemination of intelligence data. SEAD—the synergistic use of radar and communications jamming and of destruction through the use of antiradiation missiles—is recognized to be a critical component of air operations, as it improves the survivability of other U.S. aircraft in combat areas. In establishing funding priorities, DOD has decided to retire certain Air Force SEAD aircraft—the F-4G and EF-111A jammer—and replace them with a new Air Force system, the high speed anti-radiation missile (HARM) targeting system on the F-16C, and an existing Navy electronic warfare aircraft, the EA-6B. 
We expressed serious concerns about the prudence of these decisions in an April 1996 report, as the decisions were made without an assessment of how the cumulative changes in SEAD capabilities would affect overall warfighting capability. Although DOD recognizes that it must adjust tactics and operations to account for performance differences between current and replacement systems, it believes that it can meet the Air Force’s SEAD needs into the next century by selectively upgrading the EA-6B and the HARM targeting system. When the Air Force completes the retirement of its most capable lethal SEAD aircraft, the F-4G, at the end of fiscal year 1996, it will primarily rely on 72 F-16C aircraft equipped with the HARM targeting system. However, the EA-6B, which will replace the EF-111 in the Air Force’s nonlethal SEAD role, can also target and fire HARM missiles. It also has a communications-jamming capability that will allow it to supplement the Air Force’s heavily burdened communications jammer, the EC-130H Compass Call. The Air Force has also decided to upgrade its EC-130H fleet to meet new threats. Recognizing that too few EA-6B aircraft may be available to meet both Air Force and Navy needs, DOD plans to retain 12 EF-111s in the active inventory through the end of 1998, when additional upgraded EA-6Bs should be available. Though the performance of the two platforms is not the same, and the multiservice use of the same platform will entail some logistics support challenges, the Chairman of the Joint Chiefs of Staff believes that retiring the EF-111 represents a “prudent risk” that DOD can take to more fully fund higher priority needs. DOD believes the SEAD mission is important and will retain about 140 radar and communications jamming aircraft and over 800 aircraft able to fire antiradiation missiles in its force structure.
From the end of 1991 through 1996, the Air Force will have replaced the engines on 126 KC-135 tankers at a cost of over $20 million per aircraft. These reengined aircraft offer up to 50 percent greater fuel off-load capacity and quieter, cleaner, and more fuel-efficient performance with lower maintenance requirements. The Air Force is considering the same upgrades to about 140 more KC-135s. Funding has been programmed to field a multi-point refueling capability that is expected to enhance cross-service operations. About $100 million has been appropriated to modify 20 KC-10 and 45 KC-135R tankers to carry wing pods that will enable these Air Force aircraft to refuel Navy and Marine Corps aircraft. About $160 million is needed to complete the KC-135 modifications. In 1991, no operational KC-10 or KC-135 tankers had this capability. There has been debate as to whether the success of the coalition air forces during the Gulf War was an evolutionary or revolutionary advancement in the conduct of air warfare. While many combat technologies—stealth, night fighting, and PGMs—proved valuable, delays in the processing of intelligence and targeting information, and difficulty in communicating that information to the forces that could use it, minimized the full impact of advanced combat technologies. The Chairman of the Joint Chiefs of Staff has stated that the development of a “system of systems”—the integration of intelligence, surveillance, and reconnaissance with precision force through the more rapid processing and transfer of targeting and other information—offers the greatest enhancement in joint warfighting capability. The Defense Science Board reported in 1993 that improvements in the effectiveness of combat aircraft would be fastest and most significant not through the purchase of new aircraft but through improvements to the interoperability and integration of existing assets. 
DOD believes the ability of sensor platforms to transfer target information quickly to air, ground, and naval units armed with PGMs will act as a force multiplier, resulting in greater lethality and possibly a reduction in force structure and munitions requirements. The $2 billion Joint Tactical Information Distribution System, for example, will net together command and control centers, sensor platforms, fighter aircraft, and surface air defense units to improve performance in the high density air combat environment, providing near real-time secure data and voice communications from sensor to shooter platforms. The Defense Airborne Reconnaissance Office is developing imagery processing standards to enable the processing of imagery from multiple sensors. Satellite communications systems being fielded provide secure communications for command authorities to command and control tactical and strategic forces of all services at all levels of conflict. The Navy’s cooperative engagement capability is being developed to integrate surface and air defenses, across service lines, over land and sea. The goal is to link all air defense forces to provide the faster transfer of targeting information. Advanced munitions will also offer benefits across mission lines. By reducing sortie requirements and allowing for weapons delivery beyond the range of enemy air defenses, advanced munitions could possibly reduce the need for air refueling as well as dedicated SEAD. The Defense Science Board noted in its 1993 report that during the Gulf War, a ton of PGMs typically replaced 12 to 20 tons of unguided munitions for many types of targets on a tonnage-per-target-kill basis, thereby reducing tactical aircraft sorties and airlift requirements. Also, for each ton of PGMs, the Board estimated that as much as 35 to 40 tons of fuel could be saved due to the decrease in overall air operations. The downsizing of U.S. 
forces in recent years has not necessarily translated into a loss of combat air power. While the number of combat aircraft has been reduced, these reductions have been largely offset by an expanded group of assets and capabilities available to the combatant commands. Capabilities have improved because (1) a larger percentage of the combat aircraft force is now able to perform multiple missions; (2) key performance capabilities of combat aircraft, such as night fighting, are being significantly enhanced; and (3) the growth in inventories of advanced long-range missiles and PGMs is adding to the arsenal of weapons and to the options available to attack targets. Moreover, the continuing integration of service capabilities in such areas as battlefield surveillance; command, control, and communications; and targeting should enable force commanders to further capitalize on the aggregate capabilities of the services and maintain extensive air power capabilities despite force-level reductions. Potential adversaries possess two types of capabilities that constitute a threat to U.S. air power accomplishing its objectives: a defensive (air defense) capability using aircraft and surface-based air defense forces and an offensive attack capability employing aircraft and cruise and ballistic missiles. The current air defense capabilities of potential adversaries, in terms of both aircraft and air defense systems, are unlikely to prevent U.S. air power from achieving its military objectives. The conventional offensive threat is judged to be low until at least early in the next century. Furthermore, efforts by potential adversaries to modernize their forces will likely continue to be inhibited by declines in the post-Cold War arms market, national and international efforts to limit proliferation of conventional arms, and the high cost of advanced weapons. 
These adversaries are also experiencing shortfalls in training, maintenance, and logistics, and many of them have weaknesses in their military doctrine. Potential regional adversaries currently possess defensive and offensive weapons considered technologically inferior to those of U.S. forces. Improvement in these capabilities depends on the acquisition of weapons and technology from outside sources. The current air defense capabilities of potential adversaries have limitations. Regarding aircraft, these nations have only small quantities of modern fighters for air defense. The bulk of their air forces are older and less capable, and their fleets are not expected to be bolstered by many modern aircraft. Similarly, for their surface-to-air defense forces, these nations tend to rely on older systems for high-altitude long-range defense and to use the more modern and effective systems, when available, at low altitudes and short ranges. U.S. aircraft are assessed to be able to overcome the most prevalent threats through tactics and countermeasures. Furthermore, the location of the most threatening assets tends to be known. For offensive operations, as with their defensive forces, the bulk of potential adversaries’ aviation forces, although potentially large in number, consists of older and less capable aircraft. The same assessment applies to long-range missile capabilities. Some potential adversaries possess significant quantities of ballistic missiles, but they tend to be of low technology and of limited military use. The potential land-attack cruise missile capabilities of these nations are low and are not expected to increase in sophistication until the middle of the next decade, if at all. Though the threat to military forces from conventionally armed missiles is low, the possibility that such weapons could be used for political purposes—and possibly armed with nuclear, biological, or chemical warheads—may affect the employment of U.S. forces.
Air defense is a high priority of potential adversaries, and it is believed most potential adversaries are trying to improve their effectiveness and survivability by upgrading existing systems, purchasing more modern weapons, and using camouflage and decoys. These improvements, if achieved, could delay U.S. combat air power from achieving air superiority quickly and cause higher U.S. and allied casualties. These nations would also like to improve their aviation and ballistic and cruise missile capabilities. However, they currently lack the capability to develop and produce the advanced systems that would allow them to significantly enhance air defense and long-range offensive capabilities. Therefore, advances will likely be confined to upgrades of existing equipment and the possible acquisition of advanced air defense systems from outside sources. Several factors, however, make that prospect less likely. Among these are (1) the modern arms market, which has changed since the end of the Cold War; (2) the high cost of modern weapons, given potential adversaries’ economic capability; and (3) a growing global conventional arms control environment. In technical comments on this report, DOD noted that important advances are being made in potential threats, in particular in advanced surface-to-air missile systems such as the SA-10. DOD said these threats, which are either in development by potential adversaries or available for sale on the international market, are expected to significantly affect U.S. capabilities to employ air power in the future. We do not discount these potential threats. However, DOD’s projections of the ability of potential adversaries to employ such systems, known weaknesses of future threat systems, the acquisition of advanced standoff weapons for U.S. aircraft, and planned improvements to existing U.S. forces, when taken together, suggest that this threat is manageable. 
Furthermore, in subsequent discussions, DOD clarified that it did not intend for its comment to suggest that U.S. electronic warfare systems could not defeat future threats but that DOD prefers to continue to maintain a variety of capabilities, including additional stealth aircraft, to meet its objectives. The volume of arms transfers has fallen significantly in recent years and is not expected to reach its former levels any time soon. The principal nations selling and buying arms are the United States and its allies. Since potential adversaries depend on foreign technology to improve their capabilities, changes in the arms market could have a substantial effect on their ability to modernize their forces. The value of the cross-border transfer of conventional arms fell by more than two-thirds from 1987 to 1994—from almost $79 billion to $22 billion in 1994 dollars worldwide, according to the latest available data from the U.S. Arms Control and Disarmament Agency (ACDA). The share of the international arms market held by the former Soviet Union, now shown as Russia, and China has fallen from a combined 40 percent to about 10 percent over the same period. At the same time, the share of the arms market held by the United States and several close allies has grown from 43 percent to 79 percent of all transfers (see fig. 3.1). During the Cold War, the Soviet Union was a primary supplier of arms to the Third World, often providing weapons without charging for them. Now Russia generally requires payment, often in hard currency, for the weapons it transfers. The latest available ACDA data show worldwide Soviet Union/Russian transfers fell from $23.1 billion in 1987 to $1.3 billion in 1994. China also reduced its arms exports over that period. Agreements for future deliveries also fell for Russia and China from the levels of the 1980s. However, Russia has increased the value of its agreements for future weapons deliveries since 1992. 
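The scale of the contraction in the ACDA figures can be confirmed with simple arithmetic. The dollar values below are the 1994-dollar figures cited above; the computed percentage is our own check, not an ACDA statistic.

```python
# Worldwide conventional arms transfers (ACDA data, billions of 1994 dollars).
transfers_1987 = 79.0
transfers_1994 = 22.0

# Fractional decline from 1987 to 1994.
decline = (transfers_1987 - transfers_1994) / transfers_1987
print(f"Decline in worldwide transfers, 1987-1994: {decline:.0%}")
# Prints a decline of 72 percent, consistent with "more than two-thirds."
```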
While overall arms transfers have fallen, those who have been buying have shown a preference for American and Western European equipment. Buyers prefer proven high quality weapons that are accompanied by good logistics support. For the most recent 3-year period available, 1992 to 1994, the arms market in terms of actual arms transfers has been dominated on the seller side by the United States and a few of its North Atlantic Treaty Organization (NATO) allies, and on the buyer side by allies of the United States in Europe, the Middle East, and East Asia. Transfers to the Middle East by supplier are shown in figure 3.2. As figure 3.2 shows, 86 percent of the value of actual deliveries of conventional arms to the Middle East for the period shown originated from the United States and four close allies—the United Kingdom, France, Canada, and Germany—and were primarily to members of the Gulf War coalition. Only about 14 percent came from Russia, China, and other sources, and some of that total also went to U.S. Gulf War allies in the Middle East. The pattern for arms sales agreements for future deliveries is similar; that is, the United States and its NATO allies are the dominant suppliers (see fig. 3.3). From 1992 to 1994, almost 92 percent of the value of sales agreements for future conventional arms deliveries to the Middle East were made by the United States, the United Kingdom, France, and Germany. Only 8 percent of agreements for future Middle East deliveries originated from Russia, China, or the rest of the world. The decline in transfers has been accompanied by the contraction of the arms industries of many weapons exporters in terms of both production and development. Arms manufacturing nations have tended to reduce the size of their own armed forces and their arms production capabilities since the Cold War ended. Development programs have been slowed in many cases, and major weapon production programs have been subject to delay, reduction, or cancellation. 
Although arms producers want to continue exports to protect domestic jobs and reduce the cost of modernizing their own forces, they are presently finding few large buyers. Arms deliveries to India have fallen substantially and transfers to Pakistan have fallen since 1990. The buying spree of America’s Persian Gulf allies has also slowed. At the same time, potential adversaries that may desire advanced weapons have not been obtaining them or placing orders with producers, in part because of economic constraints and internationally imposed limits on arms transfers. While the development of more capable weapons is likely to continue, the ability of potential adversaries to obtain these weapons in large numbers is not assured. The cost of modern high technology weapons continues to grow, while the ability of these countries to afford such systems is constrained. Additionally, international efforts to restrict arms and technology proliferation have been increasing in terms of both the types of technology targeted and the number of exporting nations agreeing to restrictions. The high technology weapons that could seriously threaten U.S. air power are expensive, no matter what the source. For example, each aircraft that is part of the original Eurofighter 2000 tactical aircraft contract is projected to cost about $75 million. An advanced surface-to-air system like the Patriot PAC-3 costs over $100 million per battery. Nations that depend on export sales of selected commodities to finance their militaries or that have closed economies could find it much harder to afford high technology systems. The more likely course for these nations is to upgrade their existing equipment, either by mixing new components with their old systems or through other upgrade programs from arms suppliers. Although such attempts could offer new challenges to the United States and its allies, they would be less threatening than more modern equipment. 
Part of the National Military Strategy entails increasing cooperation with regional allies while containing regional powers not friendly to the United States and its allies. Conventional arms control is part of this strategy. Some international agreements/collaborations and domestic weapons export policies are designed to limit the opportunities for regional powers to acquire advanced weapons. For example, the United Nations imposed sanctions on more than one nation in the 1990s, prohibiting transfers of weapons or commercial technology to these nations that could be used for military purposes. ACDA data show no measurable arms transfers to nations under U.N. sanctions since sanctions were imposed. A key collaboration, the Wassenaar Arrangement, took effect in December 1995. This arrangement—the goal of which is complete disclosure of arms transfers—has 28 member nations. This cooperative effort replaces the Coordinating Committee for Multilateral Export Controls (COCOM), the Cold War regime that limited arms and technology transfers to Soviet bloc nations. The Wassenaar Arrangement has identified several nations that are to be excluded from arms exports or exports of potential dual-use technology—that is, technology with military as well as commercial applications. It is hoped that this agreement will allow major weapons producers to target volatile regions for restraint in the transfer of arms. Although Wassenaar does not constitute a formal treaty, major arms manufacturing countries have agreed to its arms transfer restrictions as part of their country’s domestic arms transfer policies. A third major arms control agreement, the Missile Technology Control Regime, was created in 1987 and is designed to specifically limit the transfer of missiles—including cruise and ballistic—and missile-related and dual-use technology. Original members were major NATO partners and Japan, but the Regime has been expanded to include more than 20 nations. The combination of U.N. 
sanctions, the Wassenaar Arrangement, and the Missile Technology Control Regime represent an obstacle to potential adversaries that seek to acquire highly capable weapons and advanced technology. Again, ACDA data indicate sharply reduced transfers to these nations in recent years, and there are no indications these agreements will be relaxed significantly in the near future. In fact, according to the State Department, the United States intends to strengthen the Wassenaar Arrangement. Given that Wassenaar members are the major arms producers and that potential adversaries generally lack an indigenous advanced weapon development and production capability, the potential for significantly inhibiting potential adversaries from improvements in capability is, to a great extent, in these member nations’ hands. Potential adversaries have not demonstrated the commitment to logistics support and training that the U.S. military considers necessary to achieve the best performance possible from the equipment available. The advanced age of the equipment currently in the inventories of these nations increases support requirements, and chronic shortages of spare parts lower their expected effectiveness. Many of the more modern systems are likely to be highly complex and difficult to maintain. Generally, the sophistication and intensity of training that potential adversaries provide their operators is considered well below U.S. standards. Furthermore, most of these countries have no experience training against an opponent like the United States. Another factor affecting the capabilities of potential adversaries is their military doctrine. No matter how effective their weapons may be, the centralized command and control that most potential adversaries exercise over the operations of their military forces further affects the effective and efficient use of the forces. Although potential adversaries possess capabilities that constitute a threat to the ability of U.S. 
air power to accomplish its objectives, the severity of these threats, particularly in relation to the formidable capability of U.S. forces to counter them, appears to be limited. Efforts by these countries to modernize their forces will likely be inhibited by declines in the post-Cold War arms market, national and international efforts to limit the proliferation of conventional arms, and the high cost of advanced weapons. Additionally, shortfalls in training, maintenance, logistics, and military doctrine further constrain the capabilities of potential adversaries. DOD’s plans for modernizing its air power forces call for spending several hundred billion dollars on new air power programs to further enhance U.S. capabilities that are already formidable. These programs, which are likely to be a significant challenge to pay for, are proceeding even though DOD has not sufficiently assessed joint mission requirements. Without such assessments, the Secretary of Defense does not have the information needed to accurately assess the need for and priority of planned modernization programs. A definitive answer as to the necessity of planned investments is not possible without knowing how aggregate service capabilities meet joint war-fighting requirements. However, our past GAO work and information developed on our mission reviews suggest that some planned investments may not be worth the costs. For some programs, the payoff in added mission capability—considering the investment required and the limited needed capability added—is not clearly substantial, as required by the National Military Strategy. For others, the security environment and/or assumptions under which the programs were justified have changed. In other cases, there are viable and less costly alternatives to planned investments. Each military service has major acquisition programs to modernize its combat air power forces. Many of them were initiated to counter a global Soviet threat. 
These programs include not only combat aircraft but also programs to acquire long-range missiles to strike land targets; advanced weapons that combat aircraft can use; theater missile defense forces; surveillance and reconnaissance assets; and command, control, and communications systems. Appendix III summarizes the costs of DOD’s major combat air power acquisition programs. If these programs proceed as planned, their total program costs, including allowances for inflation, are estimated to exceed $300 billion, about $60 billion of which has already been spent. Not included in these totals is the cost of the Joint Strike Fighter, the program that is likely to be the most costly of all. DOD has only published initial research, development, test and evaluation cost data on this program, which is projected to provide about 2,978 advanced joint strike-fighter aircraft for the Navy, Air Force, and Marine Corps beginning in the next decade. The Congressional Budget Office (CBO) estimates a total acquisition cost, based on DOD’s goals for the program, of $165 billion in 1997 dollars. The largest segment of DOD’s planned air power investments reflects the plan to replace aging fighter and attack aircraft. Because of the large defense buildup of the 1980s and the changed national security environment of the 1990s, DOD has in recent years significantly cut back on the procurement of such aircraft. These aircraft, which include the F-15s, F-16s, and F/A-18C/Ds for which production lines remain open, are highly capable aircraft. Nevertheless, DOD plans to replace them with more advanced and costly systems, but not necessarily on a one-for-one basis. The costs to replace the older model aircraft with new ones are projected to be quite substantial in the next decade. In fact, DOD estimates that it will spend about as much to procure combat aircraft in the next decade as it spent during the 1980s force buildup, even with the figures adjusted for inflation. 
DOD’s force modernization plans are based on several assumptions. First, DOD assumes that the defense budget top line will stop its decline in fiscal year 1997 and begin to rise and that funding for procurement will increase to $60.1 billion in fiscal year 2001. Second, DOD assumes it will achieve significant savings through base closures and other infrastructure reductions and “outsourcing” many support activities. Additionally, DOD assumes that savings will be realized from overhauling the defense acquisition system. There are reasons to be skeptical about the practicality of modernizing U.S. air power under these assumptions. An annual $60 billion procurement appropriation in fiscal year 2001 would be over 40 percent higher than that in the fiscal year 1997 budget. In each of its last three future years defense programs, DOD has postponed planned increases in its procurement budget request. As for infrastructure savings, our review of DOD’s 1996-2001 Future Years Defense Program identified only negligible net savings accruing over the program’s 6 years. Acquisition reform savings may also prove to be elusive. For example, although DOD expects to accrue substantial savings by reforming contract management and oversight requirements, we reported in April 1996 that initial results of such reforms indicate such savings may be minimal. In testimony before Congress in June 1996, senior DOD officials reported that military service and OSD officials reviewed the affordability of the three largest combat aircraft programs—the F-18E/F, F-22, and Joint Strike Fighter. According to the testimony, these officials determined that the overall planned investment in these programs was within historical norms and affordable within service priorities. Neither the Chairman of the Joint Chiefs of Staff nor CBO is as optimistic. The Chairman, in October 1995, said DOD’s tactical aircraft procurement plans call for much greater than expected resources in the out-years. 
CBO, in testimony before the Congress in June 1996, said its analysis of DOD’s fighter procurement plans suggests that they may not be affordable and that the programs will probably need to be scaled back. Using DOD goals for the three programs, CBO estimated that the Air Force and the Navy would need about $9.6 billion annually over the 2002-2020 period to buy fighter and attack aircraft, but may only have about $6.6 billion available to spend. The agency also described the aging of the fighter fleet as “worrisome,” suggesting that future leaders could have less flexibility in dealing with funding cuts. DOD makes decisions on the affordability of its modernization plans in an environment that encourages the “selling” of programs, along with undue optimism, parochialism, and other compromises of good judgment. Once DOD initiates major acquisition programs, such as the F-22, F/A-18E/F, and the Joint Strike Fighter, it has historically made a nearly irrevocable commitment to the program, unless the program experiences a catastrophe. Once begun, programs develop constituencies in the services, OSD, industry, the user community, and Congress—constituencies that give a momentum to programs and make their termination an option rarely considered by DOD. DOD has done little analysis to establish joint mission area requirements for some specific combat air power missions or to plan the aggregate capabilities needed by each of the services to meet those requirements. Studies that may provide such information on several key air power missions have been initiated but were not completed at the end of our review. Without such analyses, decisions on the need for new weapon systems, major modifications, and added capabilities evolve from a requirements generation process that encourages each service to maintain its own view of how its own capabilities should be enhanced to meet warfighting needs. 
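CBO’s affordability concern cited above amounts to a straightforward funding-gap calculation. The annual figures are CBO’s; the cumulative total is our own extrapolation over the 2002-2020 period and is not a figure CBO reported.

```python
# CBO estimates for Air Force/Navy fighter and attack aircraft procurement,
# in billions of dollars per year over the 2002-2020 period.
needed_per_year = 9.6
available_per_year = 6.6
years = 2020 - 2002 + 1  # 19 years, counting both endpoints

annual_gap = needed_per_year - available_per_year  # shortfall each year
cumulative_gap = annual_gap * years                # our extrapolation, not CBO's
print(f"Annual shortfall: ${annual_gap:.1f} billion")
print(f"Cumulative shortfall: ${cumulative_gap:.0f} billion over {years} years")
```

The roughly $3 billion annual gap is what leads CBO to conclude the programs "will probably need to be scaled back."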
In its May 1995 report, the Commission on Roles and Missions of the Armed Forces substantiated what our reviews of defense programs have found, that “each Service is fully engaged in trying to deliver to the CINCs what the Service views as the best possible set of its specific capabilities—without taking into account the similar capabilities provided by the other Services.” The analyses used to generate weapon system requirements for new acquisition programs are most often narrowly focused. They do not fully consider whether the capabilities of the other services to perform a given mission mitigate the need for a new acquisition or major modification. Significant limitations in study methodologies and the use of questionable assumptions that can result in overstated requirements are apparent in three DOD studies examining requirements for bombers in conventional conflicts. None of the studies, for example, assessed whether fighters or long-range missiles could accomplish the mission more cost-effectively than bombers. One of the studies, done by the Air Force and used by it to estimate and justify bomber requirements, assumed that only bombers would be available to strike time-critical targets during the first 5 days of a major regional conflict. This assumption seems to conflict with DOD planning guidance, which assumes that Air Force and Navy combat aircraft would arrive early enough in theater to attack targets at the outset of a major regional conflict. Under DOD’s requirements generation system, DOD components (principally the military services) are responsible for documenting deficiencies in current capabilities and opportunities to provide new capabilities in mission needs statements. If the potential material solution could result in a major defense acquisition program, the JROC is responsible for review and validation of the need. 
Validated needs statements are to be reviewed by the Defense Acquisition Board, which is responsible for identifying possible material alternatives and authorizing concept studies, if necessary. OSD’s Director of Program Analysis and Evaluation is responsible for reviewing any analyses of alternatives for meeting the validated need. While DOD has decision support systems, such as the requirements generation system and the planning, programming, and budgeting system, to assist the senior officials in making critical decisions, reviews like those done by the JROC and by OSD staff do not have the benefit of information on joint mission requirements and the aggregate capabilities of the services to meet those requirements. Therefore, such reviews can provide little assurance that there is a valid mission need, that force capabilities are being properly sized to meet requirements, and that the more cost-effective alternative has been identified. Additionally, because many weapon system modernization programs fall outside the major defense acquisition program definition, many service modernization initiatives are not validated by the JROC. DOD has defended its requirements generation system, saying the services have valid complementary requirements in many of the mission areas. In its opinion, the overlapping capabilities acquired add to the options available to U.S. leadership in a crisis and allow combatant commanders to tailor a military response to any contingency. We acknowledge that flexibility is important to respond to contingencies and that a certain amount of overlapping capability is needed. The question is whether, in the post-Cold War era, the United States needs or can afford to sustain current levels of redundancy. Advanced combat systems are not only costly to acquire, they are also expensive to operate and maintain. 
For example, DOD data indicates that the annual direct cost to operate and support an F-14 in the active inventory is about $2.2 million, an F-18 about $1.7 million, an F-15 about $3.2 million, and an F-16 about $2.2 million. These figures include the cost of the aircrews. The lack of information on joint mission needs and aggregate capabilities to meet those needs prevents a definitive answer as to whether DOD’s air power investment programs are justified. Based on our past reviews of individual air power systems and available information we collected on our six mission reviews, we believe that DOD is proceeding with some major investments without clear evidence that the programs are justified. When information is viewed more broadly, some programs appear to add only marginally to already formidable capabilities in some areas. Also, the changed security environment has lessened the need for some programs, and for others, viable, less costly alternatives appear to exist. Whether DOD’s planned investments represent the most cost-effective mix of air power assets to accomplish combat air power missions is unclear because past DOD assessments have largely skirted the question of sufficiency. However, available information suggests that existing capabilities in mission areas like interdiction, air-to-air combat, and close support are quite substantial even without further enhancements. In the interdiction mission area—the diverting, disrupting, delaying, or destroying of enemy forces before they can be used against U.S. forces—both current capabilities and those expected to be in place in 2002 are sufficient to hit all identified ground targets for the two major regional conflicts with considerable margin for error. 
Based on service data on current and planned interdiction capabilities and Defense Intelligence Agency and service threat assessments that identified enemy targets, the services already have at least 10 ways to hit 65 percent of the thousands of expected ground targets in two major regional conflicts. Some targets can be hit by 25 or more combinations of aircraft and weapons. In addition, service interdiction assets can provide 140 to 160 percent coverage for many types of targets. Despite this level of capability, the services are modifying current platforms and developing new weapon systems that will provide new and enhanced interdiction capabilities over the next 15 to 20 years at a total estimated cost of over $200 billion. These enhancements include the F/A-18E/F attack fighter, the ATACMS, major modifications to the B-1B bomber, more PGMs and improvements to aircraft and weapons, and acquisition of the Comanche armed reconnaissance helicopter. The Joint Strike Fighter, which is not included in the $200 billion estimate, will also provide interdiction capabilities. In the area of air-to-air combat—a critical mission to achieve and retain air superiority—over 600 combat-designated F-14 and F-15 fighter aircraft are dedicated to this mission. This number far exceeds the quantity and quality of fighter aircraft potential adversaries are projected to have. In addition, about 1,900 other combat designated multirole fighter aircraft, such as F-16s and F/A-18C/Ds, while not dedicated to air superiority missions, are very capable air superiority fighters. These aircraft could assist F-14s and F-15s to defeat enemy fighters before being used for other missions such as interdiction and close support. 
The capabilities of these fighter aircraft have also been enhanced extensively with the procurement of advanced weapons—particularly over 7,400 advanced medium range air-to-air missiles—and through continuing improvements to these weapons and to support platforms, such as airborne warning and control system aircraft, that help the fighters locate, identify, track, and attack enemy aircraft at great distances. Despite the unparalleled U.S. air-to-air capabilities, the Air Force plans to begin to replace its F-15s with 438 F-22 fighters in 2004, at an estimated average unit procurement cost of about $111 million. Release of long-lead production funding for the first lot of four F-22s is scheduled for fiscal year 1998. DOD expects that the F/A-18E/F and the Joint Strike Fighter will further add to U.S. air superiority capabilities. In the area of close support, the military services collectively possess a substantial inventory of weapon systems. These assets include five types of artillery, four types of attack helicopters, five types of fixed-wing aircraft, and 5-inch naval guns on cruisers and destroyers. DOD data indicates that in the year 2001, the U.S. military will have about 3,680 artillery systems, 1,850 attack helicopters, and 2,380 multirole fixed-wing aircraft that can provide close support as well as an unspecified number of naval 5-inch guns. The services plan to spend over $10.6 billion to further improve these capabilities between fiscal years 1996 and 2001, including major improvements to the Marine Corps’ AV-8B close support aircraft and the Army’s Apache attack helicopter. Additional major acquisition programs that could further enhance close support capabilities include the F/A-18E/F strike fighter, the Joint Strike Fighter, and advanced munitions to attack ground targets. Given the current security environment, the extensive aggregate capabilities U.S. 
forces now possess may lessen the need to proceed with several key modernization programs as currently planned, since the capabilities being acquired are not urgently needed. The two most prominent examples are the planned production of F-22 air superiority fighters and modifications to the B-1 bombers. The Air Force is proceeding with plans to begin to acquire F-22 air superiority fighter aircraft in fiscal year 1999 and rapidly accelerate the pace of production to 48 aircraft per year. This is being done despite the services’ unmatched capabilities in air-to-air combat. The Air Force initiated the F-22 (advanced tactical fighter) program in 1981 to meet the projected threat of the mid-1990s. Since the F-22 entered engineering and manufacturing development, the severity of the projected threat in terms of quantities and capabilities has declined. Instead of confronting thousands of modern Soviet fighters, U.S. air forces now expect to confront potential adversaries that have few fighters with the capability to challenge the F-15, the current U.S. frontline fighter. Further, our analysis, reported in March 1994, indicated that the current inventory of F-15s can be economically maintained in a structurally sound condition until 2015 or later. Thus, the planned rapid increase in the rate of production to achieve initial operational capability in 2004 may be premature. Further, because F-22s are expected to be substantially more effective than F-15A-Ds, replacing the F-15A-Ds on a one-for-one basis, as currently planned, may be unnecessary. DOD estimates the average procurement cost of an F-22 will be about $111 million. In technical comments on a draft of this report, DOD said that several current or soon-to-be-fielded fighters are at parity with the F-15, but provided no further details. Although we recognize that several foreign aircraft being developed will be at rough parity with the F-15C, it is uncertain how quickly the aircraft will be produced. 
It is also unlikely that large quantities will be available to and affordable for potential adversaries. In the case of the B-1B bomber, DOD needs to reexamine the need to keep this aircraft in the inventory and make several billion dollars of modifications to it. With the Cold War over and a reduction in the requirement for a large fleet of manned penetrating bombers that can deliver nuclear warheads in a global nuclear war, the B-1B will no longer be part of the U.S. nuclear force. The Air Force plans to modify its fleet of 95 B-1Bs to increase their conventional capability and sustainability. The B-1Bs can currently carry only the 500-pound unguided, general-purpose bomb and cluster munitions; but after the modification, the B-1Bs will be able to carry more types of conventional ordnance. Several factors make the continued need for B-1Bs questionable. First, DOD considers its current capability sufficient to meet its requirement to interdict enemy targets identified in two major regional conflicts. Second, our analysis of Air Force targeting data indicates the modified B-1B would strike a very small percentage of the Air Force’s designated targets. Third, combatant command officials stated they would use far fewer B-1Bs than DOD cites as necessary. Fourth, other Air Force and Navy aircraft can launch the same munitions as the modified B-1B, as well as other munitions. Retiring the B-1B would increase U.S. forces’ dependence on other capabilities and the risk that some targets might not be hit as quickly. However, it is reasonable to expect that the targets assigned to the B-1 could be hit by other assets, including missiles such as ATACMS and Tomahawk. If DOD retired the Air Force’s 95 B-1Bs immediately, it could save almost $5.9 billion in budget authority over the next 5 years. These issues surrounding the B-1 are discussed in our report on the bomber force, which we expect to issue shortly. 
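The budget implications of retiring the B-1B can be put in per-year and per-aircraft terms. The $5.9 billion and 95-aircraft figures are from the analysis above; the derived averages are our own arithmetic, offered only as rough orders of magnitude.

```python
# Estimated budget authority saved by immediately retiring the B-1B fleet.
savings_billions = 5.9  # over the next 5 years (report figure)
years = 5
fleet_size = 95

per_year = savings_billions / years                   # average savings per year
per_aircraft = savings_billions * 1000 / fleet_size   # millions per aircraft, 5-year total
print(f"Average savings: ${per_year:.2f} billion per year")
print(f"About ${per_aircraft:.0f} million per aircraft over {years} years")
```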
Analysis suggests that viable, less costly program alternatives may be available for some mission areas. The Navy’s planned purchase of 1,000 F/A-18E/F fighter aircraft at an estimated cost (as of Dec. 1995) of $81 billion is a case in point. The F/A-18E/F is intended to replace F/A-18C/D aircraft and to perform Navy and Marine Corps fighter escort, interdiction, fleet air defense, and close support missions. The aircraft’s origins are traceable to a 1988 study that identified upgrade options to the F/A-18C/D in performing these missions. However, the operational deficiencies in the F/A-18C/Ds that the Navy cited in justifying the F/A-18E/F either have not materialized as projected or can be corrected with nonstructural changes to the F/A-18C/D. Furthermore, the F/A-18E/F’s operational capabilities will be only marginally improved over the F/A-18C/D’s. In addition, while the F/A-18E/F will have increased range over the F/A-18C/D, the F/A-18C/D’s range will exceed the range required by the F/A-18E/F’s system specifications, and the F/A-18E/F’s range increase is achieved at the expense of its combat performance. Also, modifications to increase the F/A-18E/F’s payload have created a weapons-release problem that may reduce the F/A-18E/F’s potential payload capability. Over the years, the Navy has improved the operational capabilities of the F/A-18C/D so that procuring more of them, rather than the new model F/A-18E/F aircraft, could be the most cost-effective approach to modernizing the Navy’s combat aircraft fleet in the mid-term. In this regard, additional upgrades, should they be needed, could be made to the F/A-18C/D, which would further improve its capabilities. These upgrades include a larger fuel tank for more range and strengthened landing gear to increase carrier recovery payload. Then, for the long term, the Joint Strike Fighter could be an alternative to the F/A-18E/F. 
The Joint Strike Fighter’s operational capabilities are projected by DOD to be equal or superior to the F/A-18E/F’s at a lower unit cost. The Army’s Comanche helicopter program provides a second example. In initiating the program, the Army sought a family of lightweight, multipurpose helicopters whose justification centered on practicality rather than the threat. The program was expected to inexpensively replace a fleet of Vietnam-era helicopters with new helicopters that would be up to 50 percent cheaper to operate and support. Within these economic constraints, the new helicopters were to offer as good a technical performance as possible. Subsequently, however, specific requirements were developed, and the program emerged as it is today—a threat-based program to yield the next generation high-performance helicopter armed with 14 Hellfire missiles at a cost significantly higher than that of the Apache, the Army’s most advanced and costly helicopter. At least three alternative helicopters are available that we believe could, if upgraded, perform many of the Comanche’s missions. The Super Cobra, for example, is a twin-engine aircraft that the Marine Corps intends to equip with a four-blade rotor. It could perform armed reconnaissance and attack missions, and the new rotor will substantially improve its flight performance. A second alternative, the Longbow Apache, performs many of the missions that the Comanche is being developed to perform, and it was ranked higher for operational effectiveness than the basic Comanche in a 1990 DOD comparison of the aircraft. Finally, the Army’s Kiowa Warrior is a much improved version of the early model Kiowa, which can perform armed reconnaissance missions. Many users believe the lethality, low observability, deployability, and speed of the Kiowa Warrior, when combined with certain upgrades or doctrinal changes, would resolve many of the deficiencies the Comanche is expected to resolve.
DOD continues to support both the F/A-18E/F and the Comanche programs. It said it is convinced that the fundamental reasons to develop the F/A-18E/F remain valid, but provided us no new data or information to support this. Regarding the Comanche, DOD believes it considered a wide range of alternatives before deciding on the Comanche. DOD’s positions are discussed in our reports on the F/A-18E/F and Army aviation modernization. DOD faces considerable funding challenges in modernizing its forces for the next century under its current plans. This is particularly so with fighter and attack aircraft, where the replacement of many aircraft scheduled for retirement in the next decade with costly new aircraft would require substantial resources. To ensure a viable combat-ready force in the future, DOD needs to deliberately consider the need for and priority of major investments in relation to joint requirements and aggregate service capabilities. Each represents a major long-term commitment and therefore requires close and continual examination to ensure a substantial payoff in added capability. The absence of joint mission area analyses makes it difficult to assess whether planned investments in air power modernization are warranted. Without a full understanding of joint requirements and aggregate service capabilities in each mission area, the Secretary of Defense does not have the information needed to make decisions about whether existing capabilities are sufficient to meet anticipated challenges or whether additional investments are justified. The fact that DOD is proceeding with modernization programs whose justifications do not, on the surface, appear to be compelling illustrates the need for continuing comprehensive mission area assessments. No program—regardless of the investment already made—should be considered irrevocable but should be continually examined as circumstances and capabilities change.
Although we have limited our illustrations in this chapter to major modernization programs, smaller programs would also benefit from mission area assessments. These assessments would help DOD determine the validity of the need for all types of new weapons investments as well as procurement quantities and also decide whether to reduce or retire existing assets. Through key legislation, Congress has sought to better integrate the capabilities of the military forces, provide for improved military advice to the Secretary of Defense apart from that provided by the military services, and strengthen the joint orientation of DOD. Although DOD has improved its joint orientation in many respects, the individual services continue to heavily influence defense decisions, particularly those related to investments in weapons. Stronger military advice from a joint perspective is needed if the Secretary is to objectively weigh the merits not only of combat air power but also of other defense programs. Although DOD has begun to assess selected warfighting capabilities from a joint perspective, this process is still evolving and has not yet led to any identifiable reductions in overlap and duplication among deployed air power forces. Nor has it led to specific platform proposals to deal with the high cost of recapitalizing DOD’s combat air power or specific proposals to transfer resources among services to meet higher priority needs. Better analytical tools and data are needed to improve joint warfighting assessments, and certain other obstacles must be overcome to reduce overlaps and achieve a stronger joint orientation. Collectively, the National Security Act of 1947 and the Goldwater-Nichols Department of Defense Reorganization Act of 1986 sought to better integrate the military forces, provide a channel for military advice to the Secretary of Defense apart from that of the individual services, and strengthen the joint orientation of the Department. 
Although DOD officials believe that the Department has improved its joint orientation in many respects, some of the underlying conditions that led to this legislation continue to surface. In many respects, the circumstances leading Congress to enact the National Security Act of 1947 parallel those surrounding the current debate over defense spending and modernization priorities. The military services’ lack of unified policy and planning during World War II, when the Army and Navy existed as separate military organizations reporting to the President, led to this major piece of defense legislation. This act created a National Military Establishment (later renamed the Department of Defense) to provide policy direction over the individual services and formally established the Joint Chiefs of Staff. In enacting this legislation, Congress sought to better integrate the distinct military capabilities of the services. The services subsequently agreed in 1948 on their respective functions. This agreement—termed the Key West Agreement—delineated services functions and was aimed at preventing unnecessary duplication. During this period, intense interservice competition for drastically shrinking defense resources erupted. The primary debate centered on whether both the newly created Air Force and the Navy should have roles in strategic bombing. Although the Air Force was assigned this role in 1948, the Navy soon initiated a major effort to build a super aircraft carrier to launch strategic bombers from its decks. Service control over combat aviation, airlift, guided missiles, and air defense weapons also generated much debate. The question of whether the nation needed or could afford all of the weapons the services proposed when defense resources were declining was central to these debates. Almost 40 years after the National Security Act sought to better integrate military capabilities, concerns over the need for a stronger joint orientation in the Department of Defense arose. 
Concerns about a perceived imbalance between service and joint advice ultimately led to the Goldwater-Nichols Department of Defense Reorganization Act of 1986 (Goldwater-Nichols). A major Senate Armed Services Committee report leading to the legislation pointed out that (1) the military services were not articulating DOD’s strategic goals or establishing priorities; (2) the military services dominated the force planning, programming, and budgeting process; (3) the Joint Chiefs of Staff system was not yielding meaningful recommendations on issues affecting more than one service, and the services retained an effective veto over nearly every Joint Chiefs action; and (4) DOD’s excessive functional orientation was inhibiting the integration of service capabilities along mission lines. This report concluded that inadequate integration could lead to unwarranted duplication, gaps in warfighting capability, and unrealistic plans. Various provisions of the Goldwater-Nichols legislation were directed at correcting these lingering problems. For example, it designated the Chairman of the Joint Chiefs of Staff as principal military adviser to the President, National Security Council, and Secretary of Defense. This provided a channel for military advice apart from the military services. The Chairman was also given new responsibilities designed to improve resource decision-making, including advising the Secretary on program recommendations and budget proposals developed by the military departments and other DOD components. Although DOD officials believe that progress has been made toward a stronger joint orientation within DOD, some of the key provisions of Goldwater-Nichols aimed at preventing unnecessary overlap and duplication have not had the intended effect.
For example, to ensure reexamination of opportunities to reduce overlap and duplication, Goldwater-Nichols directed the Chairman, Joint Chiefs of Staff, to periodically report to the Secretary of Defense his recommendations on how the assigned functions of the armed services should be changed to avoid undue redundancy. The Defense Authorization Act for Fiscal Year 1993 added additional matters for the Chairman to consider in his report, including the extent to which the armed forces’ efficiency would be enhanced by the elimination or reduction of duplication in capabilities of DOD components. The Chairman completed two reviews—the most recent in 1993—but neither has led to significant changes in service roles, missions, and functions involving combat air power. Congressional dissatisfaction with the results of the Chairman’s reviews was one factor leading Congress to direct DOD to establish an independent commission to review the allocation of roles, missions, and functions among the armed forces and to recommend how they should be changed. The ensuing Commission on Roles and Missions of the Armed Forces reported its findings in May 1995. Once again, some of the same problems that had led to the Goldwater-Nichols legislation nearly 10 years before surfaced. For example, the Commission observed that the primary problems in weapon system acquisitions were traceable to inadequacies in the early phase of the requirements determination process. In the Commission’s view, the lack of a unified concept and analysis of warfighting needs was the critical underlying problem. The Commission concluded in its report that joint thought and action needed to become a compelling reality throughout DOD if the objectives of Goldwater-Nichols were to be realized. It recommended various actions to improve the management structures and decision support processes related to DOD’s requirements development and budgeting.
A key conclusion in this regard was that the JROC and OSD staff needed to have a greater ability and willingness to address DOD needs in the aggregate. Accordingly, the Commission recommended that the JROC’s charter over joint requirements formulation be strengthened. It also recommended that DOD increase the technical and analytic capacity of the Joint Staff to better assist the Chairman and Vice Chairman. The Secretary of Defense requested more study of several key Commission proposals. Many of these studies were still underway or the results were under consideration within DOD at the completion of our review. Since the spring of 1994, the Chairman and Vice Chairman of the Joint Chiefs of Staff have taken steps to implement a process to assess U.S. warfighting needs and capabilities from a joint perspective. This process, which has centered around the JROC, is intended to provide the Chairman, and ultimately the Secretary of Defense and the Congress, with a joint view on program and budget issues. Both the Chairman and Vice Chairman recognized that the requirements generation and resource allocation processes depended heavily on each service’s assessment of its individual needs and priorities and that requirements had not been sufficiently reviewed from a joint perspective. In response to these concerns, the JROC’s role was expanded and a new process to assess warfighting capabilities from a joint mission perspective was established to support the JROC’s deliberations. While this process has contributed to changes that should improve joint warfighting, its role is still evolving, and its impact on air power programs and budgets has been limited. Between 1986 and 1994, the JROC served as the principal forum for senior military leaders to review and validate mission need statements for major defense acquisition programs. Approved mission need statements are reviewed by the Defense Acquisition Board, which decides whether concept studies of solutions should be performed.
In early 1994, the Chairman of the Joint Chiefs of Staff directed the Vice Chairman to expand the JROC charter to more fully support the Chairman in executing his statutory responsibilities. In addition to validating mission needs statements for major defense acquisition programs, Council responsibilities now include assisting the Chairman in (1) assessing joint warfighting capabilities, (2) assigning a joint priority among major weapons meeting valid requirements, and (3) assessing the extent to which the military departments’ program recommendations and budget proposals conform with established priorities. Under the Fiscal Year 1996 Defense Authorization Act, title 10 of the U.S. Code was amended to include the JROC and its functions. The function of assigning priorities was revised and expanded through this legislation to include assisting the Chairman in identifying and assessing the priority of joint military requirements (including existing systems and equipment), ensuring that the assignment of priorities conforms to and reflects resource levels projected by the Secretary of Defense. Additionally, the JROC’s responsibilities were further expanded to include assisting the Chairman in considering the relative costs and benefits of alternatives to acquisition programs aimed at meeting identified military requirements. Figure 5.1 shows the JROC’s expanded responsibilities. The Fiscal Year 1996 Defense Authorization Act also designated the Chairman of the Joint Chiefs of Staff as the Chairman of the JROC. Other Council members include an Army, Air Force, and Marine Corps officer in the grade of general and a Navy admiral. The Chairman can delegate his functions only to the Vice Chairman of the Joint Chiefs of Staff, who for years has chaired the Council. In executing its responsibilities, the JROC does not vote, but rather develops a consensus, or unanimity, in the positions it takes.
To assist the JROC in advising the Chairman on joint warfighting capabilities, the joint warfighting capability assessment (JWCA) process was established in April 1994. Under this process, 10 assessment teams have been established in selected mission areas (see fig. 5.2). As sponsors of the JWCA teams, Joint Staff directorates coordinate the assessments with representatives from the Joint Staff, services, OSD, combatant commands (CINCs), and others as necessary. The teams are organized separate and apart from the Joint Staff and report to the JROC, which decides which issues they will assess. The intent is for the JWCA teams to continuously assess available information on their respective joint capability areas to identify opportunities to improve warfighting effectiveness. A key word is “assess.” The teams do not conduct analytical studies to develop new information to support the JROC. Rather, they assess available information and then develop and present briefings to the JROC. The JWCA teams produce only briefings, not reports or papers that lay out in detail the pros and cons of any options identified to address the issue(s) at hand. The Chairman uses the information from the JWCA team assessments to develop two key documents—the Chairman’s Program Recommendations, which contains his recommendations to the Secretary of Defense for consideration in developing the Defense Planning Guidance, and the Chairman’s Program Assessment, which contains alternative program recommendations and budget proposals for the Secretary’s consideration in refining the defense program and budget. In expanding the JROC process, including the establishment of the JWCA teams, it was envisioned that the JROC would be more than simply another military committee on which members participate strictly as representatives of their services. Recommendations coming from the JROC would not simply reflect the sum of each service’s requirements. 
Rather, the JROC, with the support of the JWCA process, would produce joint information the Chairman needs to meet his program review and assessment responsibilities and to resolve cross-service requirements issues, eliminate duplicative programs, and pursue opportunities to enhance the interoperability of weapon systems. The JWCA process has been in existence over 2 years and is still evolving. Representatives of both the Joint Staff and OSD believe that the process has led to more systematic and extensive discussions of joint issues among the top military leadership. They also believe that JWCA briefings have led to more informed and extensive discussions of joint issues within the JROC. Progress has been made on some interoperability issues as a result of the process. For example, in response to a JROC tasking, a JWCA team combined with Joint Staff elements to assess the interoperability of intelligence sensors and processors, fusion, and communication systems. According to the Chairman of the Joint Chiefs of Staff, the team’s recommendations will improve the interoperability among the individual services’ platforms so that data can be provided in a more timely manner to the battlefield. JWCA teams have also, on at least one occasion, been used in conjunction with other DOD elements to study key issues for the Secretary of Defense. In 1994, in response to a request of the Deputy Secretary of Defense, the JROC chairman formed a study group using representatives of three JWCA teams and several offices within OSD to examine issues related to precision strikes on targets and required intelligence support. The study group briefed the JROC on its findings and recommendations concerning databases, battlespace coverage, joint targeting doctrine, battle damage assessment, and other areas. 
A key recommendation was that intelligence, surveillance, and reconnaissance and command, control, and communications considerations be fully integrated early into the weapon system acquisition process. To implement this recommendation, the group devised revisions to DOD acquisition regulations that have been adopted. While the new JWCA process has raised the level of attention and sensitivity to joint issues, we found little evidence that the process is identifying unnecessary or overly redundant air power capabilities, confronting the challenge of modernizing the military’s air power, or helping establish priorities among competing programs. According to representatives from several JWCA teams, the teams have not been identifying tradeoffs among combat air power forces or programs to reduce redundancies. We were told that, unless specifically directed by the JROC, the JWCA teams are not empowered to develop such proposals. The primary example cited to us of an impact the JWCA teams had on reducing overlap among the services was DOD’s decision to retire the Air Force’s EF-111 radar jamming aircraft and consolidate the services’ airborne radar jamming capabilities into one platform—the Navy’s EA-6B. Documentation provided to us, however, indicates only that the JWCA process became involved subsequent to the approval of the consolidation, when the Deputy Secretary of Defense asked the Vice Chairman of the Joint Chiefs of Staff to study the associated operational issues. The air superiority JWCA team performed the study, which included evaluating the performance of the EA-6B, developing an integrated operational concept for the consolidation, proposing a transition schedule, and assessing the requirement for upgrades to the EA-6B. Joint Staff officials told us JWCA teams have not examined the affordability of individual weapon systems in their assessments.
Moreover, according to one Joint Staff official, attempts to raise these larger, more controversial issues have not led to specific JWCA assessment mandates from the JROC. For example, the JWCA teams elevated recapitalization and affordability issues to the JROC in December 1995. At these meetings, the issue of the affordability of acquiring high-priced aircraft, particularly after the turn of the century under projected budgets, was raised. According to Joint Staff officials, the top 20 most expensive acquisition programs—half of them aircraft—were presented to the JROC during these meetings. Although the JROC and the services conceptually agreed on the need to scrutinize the cost of tactical aircraft, the JROC has not taken any concrete actions or directed the JWCA teams to further study the affordability issue. Additionally, we found little evidence that the JROC, with the support of the JWCA process, has developed specific proposals to transfer resources from one service to another to meet higher priority needs. A review of Future Years Defense Program data also indicates no notable shifts in acquisition funding among the services between fiscal years 1994 and 2001. A key goal of the JROC, according to the Office of the Vice Chairman of the Joint Chiefs of Staff, is to enhance force capability by assisting the Chairman in proposing cross-service transfers of resources. Additionally, Joint Staff officials told us the JWCA teams have not developed proposals to shift funding among programs to reflect higher priorities from a joint perspective. In assessing the impact of the JROC and the JWCA process on combat air power, we examined two important ultimate outputs of the process—the Chairman’s Program Assessment and Program Recommendations to the Secretary of Defense. Under its broadened mandate, the JROC has been made a focal point for addressing joint warfighting needs. 
It is expected to support the Chairman in advising the Secretary by making specific programmatic recommendations that will, among other things, lead to increased joint warfighting capability and reduce unnecessary redundancies and marginally effective systems, within existing budget levels. However, in reviewing the Chairman’s 1994 and 1995 program assessments and 1995 program recommendations, we found little to suggest that this type of advice is being provided. The documents did not offer specific substantive proposals to reduce or eliminate duplication among existing service systems or otherwise aid in addressing the problem of funding recapitalization. In fact, the Chairman’s 1995 Program Assessment indicates an inability on the Chairman’s part, at least at that point, to propose changes in service programs and budgets. While the Chairman expressed serious concerns in his assessment about the need for and cost of recapitalizing warfighting capabilities and said that the power of joint operations allows for the identification of programs to be canceled or reduced, his advice was to defer to the services to make such choices. DOD must overcome several obstacles that have inhibited JWCA teams and others that try to assess joint mission requirements and the services’ aggregate capabilities to fulfill combat missions. In addition to scarce information on joint mission requirements and aggregate service capabilities discussed in chapter 4, impediments include (1) weak analytical tools and databases to assist in-depth joint mission area analyses, (2) weaknesses in DOD’s decision making support processes, and (3) the services’ resistance to changes affecting their programs. DOD officials acknowledge that current analytical tools, such as computer models and war games used in warfighting analyses, should be improved if they are to be effectively used to analyze joint warfighting. 
They told us these tools often do not accurately represent all aspects of a truly joint force, frequently focus on either land or naval aspects, and often do not consider the contribution of surveillance and reconnaissance and command and control assets to the warfighter. Some models are grounded in Cold War theory and must be augmented with other evaluations to minimize their inherent deficiencies. DOD representatives and analysts from the military operations research community also observe that there are serious limitations in the data to support analyses of joint capabilities and requirements. Presently, anytime DOD wants to study joint requirements, a database must be developed. Concerns then arise over whether the databases developed and used are consistent, valid, and accurate. Efforts have been made in the past to collect joint data and develop appropriate models for analyzing joint warfare. These efforts, however, fell short, as there was not a consistent, compelling need across enough of the analytic community to do the job adequately. A current major initiative aimed at improving analytical support is the design and development of a new model—JWARS—that will simulate joint warfare. JWARS will seek to overcome past shortcomings and will include the contributions of surveillance and reconnaissance and command, control, and communication assets to the warfighter. This initiative was developed as part of DOD’s joint analytic model improvement program because of the Secretary of Defense’s concern that current models used for warfare analysis are no longer adequate to deal with the complex issues confronting senior decisionmakers. Under this program, DOD will upgrade and refine current warfighting models to keep them usable until a new generation of models to address joint warfare issues can be developed. The new models are intended to help decisionmakers assess the value of various force structure mixes. 
As part of this broad initiative, DOD also intends to develop a central database for use in mission area studies and analyses. In addition to problems with models and data, the Roles and Missions Commission identified a need to improve analytical capabilities in both the Office of the Secretary of Defense and the Joint Staff. Commission staff said that there has been too much reliance on the services for analytical support and that the Joint Staff should improve its abilities to look broadly across systems and services in conducting analyses. Recognizing the need for more information and analytical support, the Joint Staff has contracted for studies to support the JWCA assessments. According to Joint Staff data, by the end of fiscal year 1996, DOD will have awarded about $24 million in contracts to support the teams. In its May 1995 report, the Roles and Missions Commission faulted the decision support processes DOD uses to develop requirements and make resource allocation decisions. It cited a need for the JROC and OSD staff to have a greater ability to address DOD needs in the aggregate. The Commission also presented ideas and recommendations to improve DOD’s decision-making processes to enable management to better develop requirements from a joint perspective. These included (1) changes to the information support network that would enable DOD to assess forces and capabilities by mission area and (2) changes to the weapons acquisition process that would enable joint warfighting concerns to be considered when requirements for new weapons are first being established. These and many other Commission proposals were still under assessment within DOD at the completion of our review. DOD, in its comments on a draft of our report, indicated that it believes the OSD and Organization of the Joint Chiefs of Staff oversight of service programs and budgets is quite rigorous. Several OSD program analysts we interviewed did not share this view. 
They described the oversight as very limited and the JWCA process as contributing very little to programming and budgeting decisions. Roles and Missions Commission staff also stressed to us that, based on their years of experience in OSD, the Secretary needs stronger independent advisory support from the OSD staff. DOD has reduced its force structure and terminated some weapon programs to reflect changes in the National Military Strategy and reduced defense budgets. But further attempts to cancel weapon programs and reduce unnecessary overlaps and duplications among forces are likely to generate considerable debate and resistance within DOD. Because such initiatives can threaten service plans and budgets, the tendency has been to avoid debates involving tradeoffs among the services’ systems. The potential effects of program reductions or cancellations on careers, the distribution of funds to localities, jobs, and the industrial base also serve as disincentives for comprehensive assessments and dialogue on program alternatives. The Chairman’s 1995 Program Assessment indicates the difficulty the Chairman has had in identifying programs and capabilities to cancel or reduce. While the Chairman recognized that the increasing jointness of military operations should permit additional program cancellations or reductions, he noted that the Joint Chiefs—despite the added support of the JROC and the JWCA process—had been unable to define with sufficient detail what should not be funded. The Chairman recommended that the Secretary of Defense look to the military services to identify programs that can be slowed or terminated. He said for this to happen, however, the services would have to be provided incentives. The Chairman recommended that the Secretary return to the services any savings they identify for application toward priority recapitalization or readiness and personnel programs. 
Joint Staff officials indicated that the Chairman’s reluctance to propose changes to major service programs may be attributable to the need for the Chairman to be a team builder and not be at odds with the service chiefs over their modernization programs. Adoption of the Chairman’s proposal could lead the services to reduce or eliminate programs and otherwise more efficiently operate their agencies, including reducing infrastructure costs. However, it is difficult to appreciate how these unilateral decisions by the services will provide for the most efficient and effective use of defense resources to meet the needs of the combatant commanders. It should be remembered that studies and hearings leading up to the Goldwater-Nichols legislation observed that the need for the Joint Chiefs of Staff to reach consensus before making decisions clearly inhibited decisions that could integrate service capabilities along mission lines. The need to address this problem was one of the primary motivations behind Goldwater-Nichols. While DOD acknowledges the need to consider joint requirements and the services’ aggregate capabilities in defense planning, programming, and budgeting, its decision support systems have not yielded the information needed from a joint perspective to help the Secretary make some very difficult decisions. Measures intended to improve the advice provided by the Chairman of the Joint Chiefs of Staff have met with limited success. The Secretary does not have enough comprehensive information on joint mission requirements and aggregate capabilities to help him establish recapitalization priorities and reduce duplications and overlaps in existing capabilities without unacceptable effects on force capabilities. The Chairman would be in a better position to provide such advice if joint warfighting assessments examined such issues. 
Efforts are underway that could provide the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and other decisionmakers with improved information to make the difficult force structure and modernization choices needed. However, the desire to reach consensus with the service chiefs—or in the case of the JROC the practice of reaching consensus among its members—could present a formidable obstacle to efforts by DOD officials to make significant changes to major modernization programs and to identify and eliminate unnecessary or overly redundant capabilities. The Secretary of Defense and the Chairman of the Joint Chiefs of Staff need to be more willing to take decisive actions on modernization programs that do not provide a clearly substantial payoff in force capability. During the Cold War, the military services invested hundreds of billions of dollars to develop largely autonomous combat air power capabilities, primarily to prepare for a global war with the Soviet Union. The Air Force acquired bombers to deliver massive nuclear strikes against the Soviets and fighter and attack aircraft for conventional and theater-nuclear missions in the major land theaters, principally Europe. The Navy built an extensive carrier-based aviation force focused on controlling the seas and projecting power into the maritime flanks of the Soviet Union. The Army developed attack helicopters to provide air support to its ground troops. The Marine Corps acquired fighter and attack aircraft and attack helicopters to support its ground forces in their areas of operation. While the United States ended up with four essentially autonomous air forces with many similar capabilities, each also largely operated within its own warfighting domains. Today, there is no longer a clear division of labor among aviation forces based on where they operate or what functions they carry out. 
Although many of the long-range bombers can still be used to deliver nuclear weapons, the air power components of the four services are now focused on joint conventional operations in regional conflicts and contingency operations. Most of the likely theaters of operation are small enough that, with available refueling support, all types of aircraft can reach most targets. And while the number of combat aircraft has been reduced, the reductions have been largely offset by an expansion in the types of assets and capabilities available to the combatant commanders. For example, (1) a larger percentage of the combat aircraft force can now perform multiple missions; (2) key performance capabilities of combat aircraft, such as night fighting, are being significantly enhanced; and (3) the inventories of advanced long-range missiles and PGMs are growing and improving, adding to the arsenal of weapons and options available to attack targets. Moreover, the continuing integration of service capabilities in such areas as battlefield surveillance; command, control, and communications; and targeting should enable force commanders to further capitalize on the aggregate capabilities of the services. DOD has not been adequately examining its combat air power force structure and its modernization plans and programs from a joint perspective. The forces of the services are increasingly operating jointly and in concert with allies in a regional versus a global environment. However, DOD’s decision support systems do not provide sufficient information from a joint perspective to enable the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and other decisionmakers to prioritize programs, objectively weigh the merits of new air power investments, and decide whether current programs should continue to receive funding. 
It is true that the overlapping and often redundant air power capabilities of the current force structure provide combatant commanders with operational flexibility to respond to any circumstance. The question is whether, in the post-Cold War era, the United States needs, or can afford, the current levels of overlap and redundancy. This is not easily answered because DOD has not fully examined the joint requirements for key warfighting mission areas or the aggregate capabilities of the services to meet those requirements. From our reviews of interdiction, air-to-air combat, and close support of ground forces, it is evident that U.S. capabilities are quite substantial even without further enhancement. For the interdiction mission, our analysis and the analysis of others showed that the services have more than enough capability to hit identified ground targets for the two major regional conflicts used in force planning. Planned investments in some cases may be adding little needed military capability at a very high cost. While it may be desirable for DOD to scale back its air power modernization plans and reduce overlapping capabilities, the challenging question is how. Such courses of action require tough choices, particularly when the military strategy is to win quickly and decisively in two nearly simultaneous major regional conflicts. Even with a more comprehensive understanding of joint requirements and the capabilities of the services to meet those requirements, the Secretary will likely continue to find it difficult to make decisions that could increase warfighting risks and affect programs, careers, jobs, and the industrial base. But without such an understanding, there may be little hope that these tough decisions will be made. The need for improved joint warfighting information is recognized in DOD and provided much of the stimulus for the establishment of the joint warfighting capability assessment teams.
A critical underlying need of these teams, or any assessment process, is objective, comprehensive cross-service and cross-mission studies and analyses of joint requirements for doing key warfighting missions and the aggregate capabilities of the services to meet those requirements. Such analyses are very demanding and may require a considerable amount of military judgment. Nonetheless, they are vital input for better understanding how much capability is needed to fulfill air power missions and what mix of air power assets most cost-effectively meets the needs of the combatant commanders within DOD’s budgets. DOD has initiated several broad studies that should provide added information. These include a deep attack/weapons mix study that includes interdiction and close support operations, a reconnaissance force mix study, and an electronic warfare mission area analysis. DOD has not routinely reviewed the justification for weapon modernization programs based on their contribution to the aggregate capabilities of the military to meet mission requirements. In our May 1996 report on DOD interdiction capabilities and modernization plans, we recommended that the Secretary of Defense conduct such reviews. DOD agreed with our recommendation. Based on our review of other missions, such reviews are needed for other key mission areas as well. Because many assets contribute to more than one mission area, cross-mission analyses will need to be part of the process. The urgent need for such assessments is underscored by the reality that significant outlays will be required in the next decade to finance DOD’s combat air power modernization programs as currently planned. Over the past few years, we have reviewed the Department’s major air power modernization programs—the F/A-18E/F, the F-22, the Comanche, and the B-1B bomber modification programs—within the context of the post-Cold War security environment.
Our work leading to this culminating report has served to reinforce the theme of these earlier assessments—namely, that DOD should revisit the program justifications for these programs because the circumstances and assumptions upon which they were based have changed. Although extensive resources have already been invested in these programs, past investment decisions should not be considered irreversible but rather should be considered in the light of new information. The extensive long-term financial commitment needed to fund all of these programs makes it imperative that these key programs—and possibly others—be reconsidered since the future viability of U.S. combat air power could be at risk if it is not smartly modernized within likely budgets. To ensure a viable, combat ready force in the future, the Secretary of Defense will need to make decisions in at least two critical areas—how best to reduce unneeded duplication and overlap in existing capabilities and how to recapitalize the force in the most cost-effective manner. To make such decisions, the Secretary must have better information coming from a joint perspective. Accordingly, we recommend that the Secretary of Defense, along with the Chairman of the Joint Chiefs of Staff, develop an assessment process that yields more comprehensive information in key mission areas. This can be done by broadening the current joint warfare capabilities assessment process or developing an alternative mechanism. 
To be of most value, such assessments should be done on a continuing basis and should, at a minimum, (1) assess total joint warfighting requirements in each mission area; (2) inventory aggregate service capabilities, including the full range of assets available to carry out each mission; (3) compare aggregate capabilities to joint requirements to identify shortages or excesses, taking into consideration existing and projected capabilities of potential adversaries and the adequacy of existing capabilities to meet joint requirements; (4) determine the most cost-effective means to satisfy any shortages; and (5) where excesses exist, assess the relative merits of retiring alternative assets, reducing procurement quantities, or canceling acquisition programs. The assessments also need to examine the projected impact of investments, retirements, and cancellations on other mission areas since some assets contribute to multiple mission areas. Because the Chairman is to advise the Secretary on joint military requirements and provide programmatic advice on how best to provide joint warfighting capabilities within projected resource levels, the assessment process needs to help the Chairman determine program priorities across mission lines. To enhance the effectiveness of the assessments, we also recommend that the Secretary of Defense and the Chairman decide how best to provide analytical support to the assessment teams, ensure staff continuity, and allow the teams latitude to examine the full range of air power issues. DOD partially concurred with our recommendations, and while it said it disagreed with many of our findings, most of that disagreement centered on two principal points: (1) the Secretary of Defense is not receiving adequate advice, particularly from a joint perspective, to support decision-making on combat air power programs, and (2) ongoing major combat aircraft acquisition programs lack sufficient analysis of needs and capabilities.
DOD said many steps had been taken in recent years to improve the extent and quality of joint military advice and cited the JWCA process as an example. It said the Secretary and Deputy Secretary receive comprehensive advice on combat air power programs through DOD’s planning, programming, and budgeting system and systems acquisition process. The Department’s response noted that both OSD and the Organization of the Joint Chiefs of Staff carefully scrutinize major acquisition programs and that joint military force assessments and recommendations are provided. DOD acknowledged that the quality of analytical support can be improved but believes that the extent of support available has been sufficient for decision-making. We agree that steps have been taken to provide improved joint advice to the Secretary. We also recognize that DOD decision support systems provide information for making planning, programming, and budgeting decisions on major acquisition programs. We do not, however, believe the information is sufficiently comprehensive to support resource allocation decisions across service and mission lines. Much of the information is developed by the individual services and limited in scope. Only a very limited amount of information is available on joint requirements for performing missions, such as interdiction and close support, and on the aggregate capabilities available to meet those requirements. DOD’s initiation of the deep attack weapons mix study and, more recently, a study to assess close support capabilities, suggests that it is, in fact, seeking more comprehensive information about cross-service needs and capabilities as our recommendation suggests.
While joint warfighting capability assessment teams have been established, DOD has not been using these teams to identify unnecessary or overly redundant combat air power capabilities among the services; nor has the Department used the teams to help develop specific proposals or strategies for recapitalizing U.S. air power forces, a major combat air power issue identified by the Chairman of the Joint Chiefs of Staff. Information on issues such as redundancies in capabilities and on recapitalization alternatives, developed from a joint warfighting perspective, would be invaluable to decisionmakers in allocating defense resources among competing needs to achieve maximum force effectiveness. With regard to the analyses of needs and capabilities behind combat air power weapons acquisition programs, we recognize that the services conduct considerable analyses to identify mission needs and justify new weapons program proposals. These analyses, however, are not based on assessments of the aggregate capabilities of the services to perform warfighting missions, nor does DOD routinely review service modernization proposals and programs from such a perspective. The Commission on Roles and Missions of the Armed Forces made similar observations. More typically, service analyses tend to justify specific modernization programs by showing the additional capabilities they could provide rather than assess the cost-effectiveness of alternative means of meeting an identified need. A 1995 study, done at the request of the Chairman of the JROC, also identified this as a problem. The study team found that analyses done to support JROC decisions frequently concentrate only on the capability of the DOD component’s proposed system to fill stated gaps in warfighter needs. Potential alternatives are given little consideration.
Additionally, as pointed out in Chapter 4 of this report, under DOD’s requirements generation process, only program proposals that meet DOD’s major defense acquisition program criteria are reviewed and validated by the JROC. Many service modernization proposals and programs are not reviewed as they do not meet these criteria. | GAO reviewed the Department of Defense's (DOD) plans to modernize its combat air capabilities, focusing on whether DOD has sufficient information from a joint perspective to: (1) prioritize its air power programs; (2) objectively weigh the merits of new program investments; and (3) decide whether existing programs should receive continued funding. GAO found that: (1) although DOD believes that its modernization plans are affordable, it faces a major challenge in attempting to fund the services' air modernization programs; (2) DOD has not sufficiently assessed joint mission requirements or compared these requirements to the services' aggregate capabilities; (3) DOD is proceeding with some major air modernization programs without clear evidence that the programs are justified; (4) the services plan to acquire numerous advanced weapons systems over the next 15 to 20 years to enhance their interdiction capabilities despite the availability of viable, less costly alternatives; (5) reductions in combat aircraft inventories have been largely offset by improvements in night-fighting and targeting capabilities and increases in advanced long-range missile inventories; (6) although potential adversaries possess capabilities that could threaten U.S. air power, the severity of these threats appears to be limited; and (7) DOD has taken steps to enhance information on joint combat requirements, but these efforts have had little impact in identifying duplication in existing air combat capabilities.
The HUBZone program was established by the HUBZone Act of 1997 to stimulate economic development through increased employment and capital investment by providing federal contracting preferences to small businesses in economically distressed communities or HUBZone areas. The types of areas in which HUBZones may be located are defined by law and consist of the following: Qualified census tracts. A qualified census tract has the meaning given the term by Congress for the low-income-housing tax credit program. The list of qualified census tracts is maintained and updated by the Department of Housing and Urban Development (HUD). As currently defined, qualified census tracts have either 50 percent or more of their households with incomes below 60 percent of the area median gross income or have a poverty rate of at least 25 percent. The population of all census tracts that satisfy one or both of these criteria cannot exceed 20 percent of the area population. Qualified census tracts may be in metropolitan or nonmetropolitan areas. HUD designates qualified census tracts periodically as new decennial census data become available or as metropolitan area definitions change. Qualified nonmetropolitan counties. Qualified nonmetropolitan counties are those that, based on the most recent decennial census data, are not located in a metropolitan statistical area and in which 1. the median household income is less than 80 percent of the nonmetropolitan state median household income; 2. the unemployment rate is not less than 140 percent of the average unemployment rate for either the nation or the state (whichever is lower); or 3. a difficult development area is located. The definition of a difficult development area is similar to that of a qualified census tract in that it comes from the tax code’s provision for the low-income-housing tax credit program. 
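The qualified census tract and qualified nonmetropolitan county tests described above reduce to a few threshold comparisons. The sketch below expresses them as simple predicates; the function and parameter names are illustrative, the precomputed household-income share is an assumption for brevity, and this is not SBA's or HUD's actual data model:

```python
def tract_qualifies(share_households_below_60pct_ami: float,
                    poverty_rate: float) -> bool:
    """Qualified census tract (as currently defined): at least half of
    households below 60 percent of area median gross income, or a
    poverty rate of at least 25 percent. Rates are fractions (0.25 = 25%)."""
    return share_households_below_60pct_ami >= 0.50 or poverty_rate >= 0.25

def county_qualifies(median_hh_income: float,
                     state_nonmetro_median_income: float,
                     county_unemployment: float,
                     state_unemployment: float,
                     national_unemployment: float,
                     has_difficult_development_area: bool) -> bool:
    """Qualified nonmetropolitan county test. Assumes the county is
    already known to lie outside a metropolitan statistical area."""
    # 1. Median household income under 80 percent of the state's
    #    nonmetropolitan median household income, or
    if median_hh_income < 0.80 * state_nonmetro_median_income:
        return True
    # 2. Unemployment not less than 140 percent of the average rate for
    #    the nation or the state, whichever is lower, or
    if county_unemployment >= 1.40 * min(state_unemployment,
                                         national_unemployment):
        return True
    # 3. A difficult development area located in the county.
    return has_difficult_development_area
```

For example, a nonmetropolitan county whose median household income is $30,000 against a $40,000 state nonmetropolitan median qualifies on the income test alone; note that the statute also caps the total population of qualified census tracts at 20 percent of the area population, which this per-tract check does not capture.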
For the low-income-housing tax credit program, difficult development areas can be located in both metropolitan and nonmetropolitan counties; however, for the HUBZone program, they can only be located in nonmetropolitan counties in Alaska, Hawaii, and the U.S. territories and possessions. Qualified Indian reservations. A HUBZone-qualified Indian reservation has the same meaning as the term Indian Country as defined in another federal statute, with some exceptions. These are all lands within the limits of any Indian reservation, all dependent Indian communities within U.S. borders, and all Indian allotments. In addition, portions of the state of Oklahoma qualify because they meet the Internal Revenue Service’s definition of “former Indian reservations in Oklahoma.” Redesignated areas. Redesignated areas are census tracts or nonmetropolitan counties that no longer meet the economic criteria but remain eligible until after the release of the 2010 decennial census data. Base closure areas. Areas within the external boundaries of former military bases that were closed by the Base Realignment and Closure Act (BRAC) qualify for HUBZone status for a 5-year period from the date of formal closure. In order for a firm to be certified to participate in the HUBZone program, it must meet the following criteria: the company must be small by SBA size standards; the company must be at least 51 percent owned and controlled by U.S. citizens; the company’s principal office—the location where the greatest number of employees perform their work—must be located in a HUBZone; and at least 35 percent of the company’s full-time (or full-time equivalent) employees must reside in a HUBZone. As of February 2008, 12,986 certified firms participated in the HUBZone program (see fig. 1). Over 4,200 HUBZone firms obtained approximately $8.1 billion in federal contracts in fiscal year 2007. 
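The four firm certification criteria listed above can likewise be sketched as a single check. This is a simplified illustration, not SBA's actual certification logic: the field names are hypothetical, and in practice "small by SBA size standards" depends on the firm's industry:

```python
from dataclasses import dataclass

@dataclass
class Firm:
    meets_sba_size_standard: bool      # small by SBA size standards
    us_citizen_ownership_pct: float    # percent owned and controlled by U.S. citizens
    principal_office_in_hubzone: bool  # office where the most employees work
    fte_employees: int                 # full-time (or full-time-equivalent) employees
    fte_employees_in_hubzone: int      # of those, how many reside in a HUBZone

def hubzone_certifiable(firm: Firm) -> bool:
    """Apply the four certification criteria described above (a sketch)."""
    return (firm.meets_sba_size_standard
            and firm.us_citizen_ownership_pct >= 51.0
            and firm.principal_office_in_hubzone
            and firm.fte_employees > 0
            # At least 35 percent of employees must reside in a HUBZone.
            and firm.fte_employees_in_hubzone / firm.fte_employees >= 0.35)
```

Under these rules a firm with 20 full-time employees needs at least 7 of them residing in a HUBZone; with only 6 (30 percent), it would fail the residency criterion even if it met the other three.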
A certified HUBZone firm is eligible for federal contracting benefits, including “sole source” contracts, set-aside contracts, and a price evaluation preference. A contracting officer can award a sole source contract to a HUBZone firm if, among other things, the officer does not have a reasonable expectation that two or more qualified HUBZone firms will submit offers and the anticipated award price of the proposed contract, including options, will not exceed $5.5 million for manufacturing contracts or $3.5 million for all other contracts. If a contracting officer has a reasonable expectation that at least two qualified HUBZone firms will submit offers and an award can be made at a fair market price, the contract shall be awarded on the basis of competition restricted to qualified HUBZone firms. Contracting officers also can award a contract to a HUBZone firm through “full and open competition.” In these circumstances, HUBZone firms are given a price evaluation preference of up to 10 percent if the apparent successful offering firm is not a small business. That is, the price offered by a qualified HUBZone firm shall be deemed as lower than the price offered by another firm (other than another small business) if the price is not more than 10 percent higher than the price offered by the firm with the lowest offer. As of October 1, 2000, all federal agencies were required to meet the HUBZone program’s contracting goals. Currently, the annual federal contracting goal for HUBZone small businesses is 3 percent of all prime contract awards—contracts awarded directly by an agency. In the HUBZone Act of 1997, Congress increased the overall federal contracting goal for small businesses from 20 percent to 23 percent to address concerns that the HUBZone contracting requirement would reduce federal contracts for non-HUBZone small businesses. Each year, SBA issues a small business goaling report that documents each department’s achievement of small business contracting goals. 
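The price evaluation preference described above amounts to a simple arithmetic comparison. The sketch below illustrates the full 10 percent preference for the two-offer case; names are illustrative, and it deliberately omits the contracting details (options, evaluation factors other than price) that apply in practice:

```python
def hubzone_offer_deemed_lower(hubzone_price: float,
                               other_price: float,
                               other_is_small_business: bool) -> bool:
    """In a full-and-open competition, a qualified HUBZone firm's price
    is deemed lower than a non-small-business offer if it is not more
    than 10 percent higher. The preference does not apply against
    another small business (a sketch)."""
    if other_is_small_business:
        return hubzone_price < other_price
    # Deemed lower if not more than 10 percent above the other offer.
    return hubzone_price <= other_price * 1.10
```

For example, a HUBZone offer of $1.09 million against a large firm's $1.00 million offer would be evaluated as the lower price, while an offer of $1.11 million would not.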
SBA administers the HUBZone program, and the HUBZone program office at SBA headquarters is responsible for certifying firms, publishing a list of HUBZone-certified firms, monitoring certified firms to ensure continuing eligibility, and decertifying firms that no longer meet eligibility requirements. A HUBZone liaison at each of SBA’s 68 district offices is responsible for conducting program examinations—investigations that verify the accuracy of information supplied by firms during the certification process, as well as current eligibility status. HUBZone liaisons also handle program marketing and outreach to the economic development and small business communities. Federal agencies are responsible for trying to meet the HUBZone contracting goal and for enforcing the contracts awarded to HUBZone firms. Each federal agency has an Office of Small and Disadvantaged Business Utilization (OSDBU), or an equivalent office, that helps the agency employ special contracting programs and monitor the agency’s overall small business and special contracting goals. In addition to the HUBZone program, SBA has other contracting assistance programs. The 8(a) program is a business development program for firms owned by citizens who are socially and economically disadvantaged. SBA provides technical assistance, such as business counseling, to these firms. While the 8(a) program offers a broad range of assistance to socially and economically disadvantaged firms, the Small Disadvantaged Business (SDB) program is intended only to convey benefits in federal procurement to disadvantaged businesses. All 8(a) firms automatically qualify for SDB certification, and federal agencies are subject to an annual SDB contracting goal of 5 percent of all federal contracting dollars. Small businesses also can be certified as service- disabled veteran-owned, and the contracting goal for these firms is 3 percent of all federal contracting dollars. 
SBA relies on federal law to identify qualified HUBZone areas, but its HUBZone map is inaccurate and the economic characteristics of HUBZone areas vary widely. The map that SBA uses to publicize HUBZone areas contains ineligible areas and has not been updated to include eligible areas. As a result, ineligible small businesses have participated in the program, and eligible businesses have not been able to participate. A series of statutory changes has resulted in an increase in the number and types of HUBZone areas. HUBZone program officials noted that such an expansion could diffuse (or limit) the economic benefits of the program. We found that different types of HUBZone areas varied in the degree to which they could be characterized as economically distressed (as measured by indicators such as poverty and unemployment rates). In recent years, amendments to the HUBZone Act and other statutes have increased the number and type of HUBZone areas. The original HUBZone Act of 1997 defined a HUBZone as any area within a qualified census tract, a qualified nonmetropolitan county, or lands within the boundaries of a federally recognized Indian reservation. Qualified census tracts were defined as having the meaning given the term in the tax code at the time— areas in which 50 percent or more of the households had incomes below 60 percent of the area median gross income. Qualified nonmetropolitan areas were counties with low median household income or high levels of unemployment. However, subsequent legislation revised the definitions of the original categories and expanded the HUBZone definition to include new types of qualified areas (see fig. 2). A 2000 statute (1) defined Indian reservation to include lands covered by the Bureau of Indian Affairs’ phrase Indian Country and (2) allowed all lands within the jurisdictional areas of an Oklahoma Indian tribe to be eligible for the program. 
The 2000 statute also amended the HUBZone area definition to allow census tracts or nonmetropolitan counties that ceased to be qualified to remain qualified for a further 3-year period as “redesignated areas.” Also in 2000, Congress changed the definition of a qualified census tract in the tax code by adding a poverty rate criterion; that is, a qualified census tract could be either an area of low income or high poverty. A 2004 statute revised the definition of redesignated areas to permit them to remain qualified until the release date of the 2010 census data. In that same statute, Congress determined that areas within the external boundaries of former military bases closed by BRAC would qualify for HUBZone status for a 5-year period from the date of formal closure. In addition, Congress revised the definition of qualified nonmetropolitan counties to permit eligibility based on a county’s unemployment rate relative to either the state or the national unemployment rate, whichever was lower. Finally, in 2005, Congress expanded the definition of qualified nonmetropolitan county to include “difficult development areas” in Alaska, Hawaii, and the U.S. territories. These areas have high construction, land, and utility costs relative to area median income. Subsequent to the statutory changes, the number of HUBZone areas grew from 7,895 in calendar year 1999 to 14,364 in 2006. As shown in figure 2, the December 15, 2000, change to the definition of a qualified census tract—a provision of the low-income-housing tax credit program— resulted in the biggest increase in the number of qualified HUBZone areas. SBA’s data show that, as of 2006, there were 12,218 qualified census tracts, 1,301 nonmetropolitan counties, 651 Indian Country areas, 82 BRAC areas, and 112 difficult development areas (see fig. 3). 
SBA program staff employ no discretion in identifying HUBZone areas because the areas are defined by federal statute, but SBA has not always designated these areas correctly on the SBA Web map. To identify and map HUBZone areas, SBA relies on a mapping contractor and data from other executive agencies (see fig. 4). When a HUBZone designation changes or more current data become available, SBA alerts the contractor. The contractor retrieves the data from the designated federal agencies, such as HUD, the Bureau of Labor Statistics (BLS), and the Census Bureau. Most HUBZone area designation data are publicly available (and widely used by researchers and the general public), with the exception of the Indian Country designation. Once the changes to the HUBZone areas are mapped, the contractor sends the maps back to SBA. SBA performs a series of checks to ensure that the HUBZone areas are mapped correctly and then the contractor places the maps and associated HUBZone area information on SBA’s Web site. Essentially, the map is SBA’s primary interface with small businesses to determine if they are located in a HUBZone and can apply for HUBZone certification. SBA officials stated that they primarily rely on firms to identify HUBZone areas that have been misidentified or incorrectly mapped. Based on client input, SBA estimated that from 1 percent to 2 percent of firms searching the map as part of the application process report miscodings. SBA’s mapping contractor researches these claims each month. During the course of our review, we identified two problems with SBA’s HUBZone map. First, the map includes some areas that do not meet the statutory definition of a HUBZone area. As noted previously, counties containing difficult development areas are only eligible in their entirety for the HUBZone program if they are not located in a metropolitan statistical area. 
However, we found that SBA’s HUBZone map includes 50 metropolitan counties as difficult development areas that do not meet this or any other criterion for inclusion as a HUBZone area. Nearly all of these incorrectly designated HUBZone areas are in Puerto Rico. When we raised this issue with SBA officials, they told us that in December 2005 they had provided the agency’s mapping contractor with a definition of difficult development areas that was consistent with the statutory language. However, according to SBA, the mapping contractor failed to properly follow SBA’s guidance when adding difficult development areas to the map in 2006. According to SBA officials, the agency is in the process of acquiring additional mapping services and will immediately re-evaluate all difficult development areas once that occurs. As a result of these errors, ineligible firms have obtained HUBZone certification and received federal contracts. As of December 2007, there were 344 certified HUBZone firms located in ineligible areas in these 50 counties. Further, from October 2006 through March 2008, federal agencies obligated about $5 million through HUBZone set-aside contracts to 12 firms located in these ineligible areas. Second, while SBA’s policy is to have its contractor update the HUBZone map as needed, the map has not been updated since August 2006. Since that time, additional data such as unemployment rates from BLS have become available. According to SBA officials, the update was delayed because SBA awarded the contract for management of the HUBZone system to a new prime contractor, which is still in the process of establishing a relationship with the current mapping subcontractor. Although SBA officials told us they are working to have the contractor update the mapping system, no subcontract was in place as of May 2008.
While an analysis of the 2008 list of qualified census tracts showed that the number of tracts had not changed since the map was last updated, our analysis of 2007 BLS unemployment data indicated that 27 additional nonmetropolitan counties should have been identified on the map. Because firms are not likely to receive information on the HUBZone status of areas from other sources, firms in the 27 areas would have believed from the map that they were ineligible to participate in the program and could not benefit from contracting incentives that certification provides. Having an out-of-date map led SBA, in one instance, to mistakenly identify a HUBZone area. When asked by a congressman to research whether Jackson County, Michigan, qualified in its entirety as a HUBZone area, an SBA official used a manual process to determine the county’s eligibility because the map was out of date. The official mistakenly concluded that the county was eligible. After that determination, the congressman publicized Jackson County’s status, but SBA, after further review, had to rescind its HUBZone status 1 week later. Had the information been processed under the standard mapping procedures, the mapping system software would have identified the area as a metropolitan county and noted that it did not meet the criteria to be a HUBZone, as only nonmetropolitan counties qualify in their entirety. In this case, the lack of regular updates led to program officials using a manual process that resulted in an incorrect determination. Qualified HUBZone areas experience a range of economic conditions. HUBZone program officials told us that the growth in the number of HUBZone areas is a concern for two reasons. First, they stated that expansion can diffuse the impact or potential impact of the program on existing HUBZone areas. 
Specifically, they noted that as the program becomes less targeted and contracting dollars more dispersed, the program could have less of an impact on individual HUBZone areas. We recognize that establishing new HUBZone areas can potentially provide economic benefits for these areas by helping them attract firms that make investments and employ HUBZone residents. However, diffusion—less targeting to areas of greatest economic distress—could occur with such an expansion. Based on 2000 census data, about 69 million people (out of 280 million nationwide) lived in the more than 14,000 HUBZones. Considering that HUBZone firms are encouraged to locate in HUBZone areas and compete for federal contracts (thus facilitating employment and investment growth), the broad extent of eligible areas can lessen the very competitive advantage that businesses may rely on to thrive in economically distressed communities. Second, while HUBZone program officials thought that the original designations resulted in HUBZone areas that were economically distressed, they questioned whether some of the later categories—such as redesignated and difficult development areas— met the definition of economic distress. To determine the economic characteristics of HUBZones, we compared different types of HUBZone areas and analyzed various indicators associated with economic distress. We found a marked difference in the economic characteristics of two types of HUBZone areas: (1) census tracts and nonmetropolitan counties that continue to meet the eligibility criteria and (2) the redesignated areas that do not meet the eligibility criteria but remain statutorily eligible until the release of the 2010 census data. For example, approximately 60 percent of metropolitan census tracts (excluding redesignated tracts) had a poverty rate of 30 percent or more, while approximately 4 percent of redesignated metropolitan census tracts had a poverty rate of 30 percent or more (see fig. 5). 
In addition, about 75 percent of metropolitan census tracts (excluding redesignated tracts) had a median household income that was less than 60 percent of the metropolitan area median household income; in contrast, about 10 percent of redesignated metropolitan census tracts met this criterion. (For information on the economic characteristics of nonmetropolitan census tracts, see app. III.) Similarly, we found that about 46 percent of nonmetropolitan counties (excluding redesignated counties) had a poverty rate of 20 percent or more, while 21 percent of redesignated nonmetropolitan counties had a poverty rate of 20 percent or more (see fig. 6). Also, about 54 percent of nonmetropolitan counties (excluding redesignated counties) had a median housing value that was less than 80 percent of the state nonmetropolitan median housing value; in contrast, about 32 percent of redesignated counties met this criterion. Overall, difficult development areas appear to be less economically distressed than metropolitan census tracts and nonmetropolitan counties (see fig. 7). For example, 6 of 28 difficult development areas (about 21 percent) had poverty rates of 20 percent or more. In contrast, about 93 percent of metropolitan census tracts (excluding redesignated areas) and about 46 percent of nonmetropolitan counties (excluding redesignated areas) met this criterion. See appendix III for additional details on the economic characteristics of Indian Country areas and additional analyses illustrating the economic diversity among qualified HUBZone areas. In expanding the types of HUBZone areas, the definition of economic distress has been broadened to include measures that were not in place in the initial statute. For example, one new type of HUBZone area—difficult development areas—consists of areas with high construction, land, and utility costs relative to area income, and such areas could include neighborhoods not normally considered economically distressed.
As a result, the expanded HUBZone criteria now allow for HUBZone areas that are less economically distressed than the areas that were initially designated. Such an expansion could diffuse the benefits to be derived from steering businesses to economically distressed areas. The policies and procedures upon which SBA relies to certify and monitor firms provide limited assurance that only eligible firms participate in the HUBZone program. Internal control standards for federal agencies state that agencies should document and verify information that they collect on their programs. However, SBA obtains supporting documentation from firms in limited instances and rarely conducts site visits to verify the information that firms provide in their initial application and during periodic recertifications—a process through which SBA can monitor firms’ continued eligibility. In addition, SBA does not follow its own policy of recertifying all firms every 3 years—which can lengthen the time a firm goes unmonitored and its eligibility is unreviewed—and has a backlog of more than 4,600 firms to recertify. Furthermore, SBA largely has not met its informal goal of 60 days for removing firms deemed ineligible from its list of certified firms. We found that of the more than 3,600 firms that were proposed for decertification in fiscal years 2006 and 2007, more than 1,400 were not processed within 60 days. As a result, there is an increased risk that ineligible firms may participate in the program and have opportunities to receive federal contracts based on HUBZone certification. To certify and recertify HUBZone firms, SBA relies on data that firms enter in its online application system; however, the agency largely does not verify the self-reported information. The certification and recertification processes are similar. 
Firms apply for HUBZone certification using an online application system, which employs automated logic steps to screen out ineligible firms based on the information entered on the application. For example, firms enter information such as their total number of employees and number of employees that reside in a HUBZone. Based on this information, the system then calculates whether the number of employees residing in a HUBZone equals 35 percent or more of total employees, the required level for HUBZone eligibility. HUBZone program staff review the applications to determine if more information is required. While SBA’s policy states that supporting documentation normally is not required, it notes that agency staff may request and consider such documentation, as necessary. No specific guidance or criteria are provided to program staff for this purpose; rather, the policy allows staff to determine what circumstances warrant a request for supporting documentation. In determining whether additional information is required, HUBZone program officials stated that they generally consult sources such as firms’ or state governments’ Web sites that contain information on firms incorporated in the state. In addition, HUBZone program officials stated that they can check information such as a firm’s address using the Central Contractor Registration (CCR) database. According to HUBZone program officials, they are in the process of obtaining Dun and Bradstreet’s company information (such as principal address, number of employees, and revenue) to cross-check some application data. While these data sources are used as a cross-check, the data they contain are also self-reported. The number of applications submitted by firms grew by more than 40 percent from fiscal year 2000 to fiscal year 2007, and the application approval rate varied. For example, as shown in table 1, 1,527 applications were submitted in fiscal year 2000, and SBA approved 1,510 applications (about 99 percent).
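The automated screen described above reduces to a simple ratio test. The following is a minimal illustrative sketch of that logic, not SBA's actual application-system code; the function name, input fields, and treatment of edge cases are assumptions:

```python
def meets_residency_requirement(hubzone_employees: int, total_employees: int) -> bool:
    """Check the HUBZone 35 percent employee-residency rule.

    Returns True when 35 percent or more of a firm's employees reside
    in a HUBZone. This is an illustrative sketch; SBA's online
    application system may handle rounding and edge cases differently.
    """
    if total_employees <= 0:
        # A firm with no employees cannot satisfy the requirement.
        return False
    return hubzone_employees / total_employees >= 0.35

# A firm with 7 of 20 employees in a HUBZone (35 percent) passes the
# screen; a firm with 6 of 20 (30 percent) does not.
```

Note that a screen like this checks only the arithmetic on self-reported figures; as the report discusses, it cannot detect employee counts that are simply misstated on the application.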
In fiscal year 2007, 2,204 applications were submitted, and SBA approved 1,721 (about 78 percent). Of the 2,204 applications submitted in fiscal year 2007, 383 (about 17 percent) were withdrawn. Either the firms themselves or SBA staff can withdraw an application if it is believed the firm will not meet program requirements. HUBZone program staff noted that they withdraw applications for firms that could, if they made some minor modifications, be eligible. Otherwise, firms would have to wait 1 year before they could reapply. The remaining 100 applications (about 5 percent) submitted in fiscal year 2007 were declined because the firms did not meet the HUBZone eligibility requirements. See appendix IV for details on the characteristics of current HUBZone firms. To ensure the continued eligibility of certified HUBZone firms, SBA requires firms to resubmit an application. That is, to be recertified, firms re-enter information in the online application system, and HUBZone program officials review it. In 2004, SBA changed the recertification period from an annual recertification to every 3 years. According to HUBZone program officials, they generally limit their reviews to comparing resubmitted information to the original application. The officials added that significant changes from the initial application can trigger a request for additional information or documentation. If concerns about eligibility are raised during the recertification process, SBA will propose decertification or removal from the list of eligible HUBZone firms. Firms that are proposed for decertification can challenge that proposed outcome through a due-process mechanism. SBA ultimately decertifies firms that do not challenge the proposed decertification and those that cannot provide additional evidence that they continue to meet the eligibility requirements. 
For example, as shown in table 2, SBA began 3,278 recertifications in fiscal year 2006 and had completed decertification of 1,699 firms as of January 22, 2008. Although SBA does not systematically track the reasons why firms are decertified, HUBZone program officials noted that many firms do not respond to SBA’s request for updated information. We discuss this issue and others related to the timeliness of the recertification and decertification processes later in this report. We found that SBA verifies the information it receives from firms in limited instances. In accord with SBA’s policy, HUBZone program staff request documentation from firms and conduct site visits when they feel it is warranted. The HUBZone Certification Tracking System does not readily provide information on the extent to which SBA requests documentation from firms or conducts site visits; therefore, we conducted reviews of applications and recertifications. Specifically, we reviewed the 125 applications and 15 recertifications submitted or begun in September 2007. For the applications submitted in September 2007, HUBZone program staff requested additional information but not supporting documentation for 10 (8 percent) of the applications; requested supporting documentation for 45 (36 percent) of the applications; and conducted one site visit. After reviewing supporting documentation for the 45 applications, SBA ultimately approved 19 (about 42 percent). Of the remaining 26 applications, 21 (about 47 percent of the 45 applications) were withdrawn by either SBA or the firm, and 5 (about 11 percent of the 45 applications) were denied by SBA. For the 15 firms that SBA began recertifying in September 2007, HUBZone program staff requested information and documentation from 2 firms and did not conduct any site visits. 
In the instances when SBA approved an application without choosing to request additional information or documentation (about 50 percent of our application sample), HUBZone program staff generally recorded in the HUBZone system that their determination was based on the information in the application and that SBA was relying on the firm’s certification that all information was true and correct. In requesting additional information, HUBZone staff asked such questions as the approximate number of employees and type of work performed at each of the firm’s locations. When requesting supporting documentation, HUBZone staff requested items such as copies of driver’s licenses or voter’s registration cards for the employees that were HUBZone residents and a rental/lease agreement or deed of trust for the principal office. Internal control standards for federal agencies and programs require that agencies collect and maintain documentation and verify information to support their programs. The documentation also should provide evidence of accurate and appropriate controls for approvals, authorizations, and verifications. For example, in addition to automated edits and checks, conducting site visits to physically verify information provided by firms can help control the accuracy and completeness of transactions or other events. According to HUBZone program officials, they did not more routinely verify the information because they generally relied on their automated processes and status protest process. For instance, they said they did not request documentation to support each firm’s application because the application system employs automated logic steps to screen out ineligible firms. For example, as previously noted, the application system calculates the percentage of a firm’s employees that reside in a HUBZone and screens out firms that do not meet the 35 percent requirement. 
But the automated application system would not necessarily screen out applicants that submit false information to obtain a HUBZone certification. HUBZone program officials also stated that it is not necessary to conduct site visits of HUBZone firms because firms self-police the program through the HUBZone status protest process. However, relatively few protests have occurred in recent years. In addition, officials from SBA’s HUBZone office did not identify a reliable mechanism that HUBZone firms could use to obtain the information needed to support a status protest. For example, it is unclear how a firm in one state would know enough about a firm in another state, such as its principal office location or employment of HUBZone residents, to question its qualified HUBZone status. Rather than obtaining supporting documentation during certification and recertification on a more regular basis, SBA waits until it is conducting examinations of a small percentage of firms to consistently request supporting documentation. The 1997 statute that created the HUBZone program authorized SBA to conduct program examinations of HUBZone firms. Since fiscal year 2004, SBA’s policy has been to conduct program examinations on 5 percent of firms each year. Over the years, SBA has developed a standard process for conducting these examinations. SBA uses three selection factors to determine which firms will be examined each year. After firms have been selected for a program examination, SBA field staff request documentation from them to support their continued eligibility for the program. For instance, they request documents such as payroll records to evaluate compliance with the requirement that 35 percent or more of employees reside in a HUBZone and documents such as organization charts and lease agreements to document that the firm’s principal office is located in a HUBZone.
After reviewing this documentation, the field staff recommend to SBA headquarters whether the firm should remain in the program. As shown in table 3, in fiscal years 2004 through 2006 nearly two-thirds of firms SBA examined were decertified, and in fiscal year 2007, 430 of 715 firms (about 60 percent) were decertified or proposed for decertification. The number of firms decertified includes firms that the agency determined to be ineligible, and were decertified, and firms that requested to be decertified. Because SBA limits its program examinations to 5 percent of firms each year, firms can be in the program for years without being examined. For example, we found that 2,637 of the 3,348 firms (approximately 79 percent) that had been in the program for 6 years or more had not been examined. In addition to performing program examinations on a limited number of firms, HUBZone program officials rarely conduct site visits during program examinations to verify a firm’s information. When reviewing the 11 program examinations that began in September 2007, we found that SBA did not conduct any site visits to verify the documentation provided. As a result of SBA’s limited application of internal controls when certifying and monitoring HUBZone firms, the agency has limited assurances that only eligible firms participated in the program. By not obtaining documentation and conducting site visits on a more routine basis during the certification process, SBA cannot be sure that only eligible firms are part of the program. And while SBA’s examination process involves a more extensive review of documentation, it cannot be relied upon to ensure that only eligible firms participate in the program because it involves only 5 percent of firms in any given year. As previously noted, since 2004, SBA’s policies have required the agency to recertify all HUBZone firms every 3 years. 
Recertification presents another opportunity for SBA to review information from firms and thus help monitor program activity. However, SBA has failed to recertify 4,655 of the 11,370 firms (more than 40 percent) that have been in the program for more than 3 years. Of the 4,655 firms that should have been recertified, 689 have been in the program for more than 6 years. SBA officials stated that the agency lacked sufficient staff to comply with its recertification policy. According to SBA officials, staffing levels have been relatively low in recent years. In fiscal year 2002, the HUBZone program office, which is located in SBA headquarters in Washington, D.C., had 12 full-time equivalent staff. By fiscal year 2006, the number had dropped to 8 and remained at that level as of March 2008. Of the 8, 3 conduct recertifications on a part-time basis. SBA hired a contractor in December 2007 to help conduct recertifications, using the same process that SBA staff currently use. According to the contract, SBA estimates that the contractor will conduct 3,000 recertifications in fiscal year 2008; in subsequent years, SBA has the option to direct the contractor to conduct, on average, 2,450 recertifications annually for the next 4 years. Although SBA has contracted for these additional resources, the agency lacks specific time frames for eliminating the backlog. As a result of the backlog, the periods during which some firms go unmonitored and are not reviewed for eligibility are longer than SBA policy allows, increasing the risk that ineligible firms may be participating in the program. While SBA policies for the HUBZone program include procedures for certifications, recertifications, and program examinations, they do not specify a time frame for processing decertifications—which occur subsequent to recertification reviews or examinations and determine that firms are no longer eligible to participate in the HUBZone program. 
If SBA suspects that a firm no longer meets standards or fails to respond to notification of a recertification or program examination, SBA makes a determination and, if found ineligible, removes the firm from its list of certified HUBZone firms. Although SBA does not have written guidance for the decertification time frame, the HUBZone program office negotiated an informal (unwritten) goal of 60 days with the SBA Inspector General (IG) in 2006. In recent years, SBA ultimately decertified the vast majority of firms proposed for decertification but, as shown in table 4, has not met its 60-day goal consistently. From fiscal years 2004 through 2007, SBA failed to resolve proposed decertifications within its goal of 60 days for more than 3,200 firms. However, SBA’s timeliness has improved. For example, in 2006, SBA did not resolve proposed decertifications in a timely manner for more than 1,000 firms (about 44 percent). In 2007, over 400 (or about 33 percent) were not resolved in a timely manner. SBA staff acknowledged that lags in processing decertifications were problematic and attributed them to limited staffing. SBA plans to use its contract staff to address this problem after the backlog of recertifications is eliminated. In addition, we and the SBA Inspector General found that SBA does not routinely track the reasons why firms are decertified. According to SBA officials, a planned upgrade to the HUBZone data system will allow SBA to track this information. While SBA does not currently track the specific reasons why firms are decertified, our analysis of HUBZone system data shows that firms were primarily decertified because firms either did not submit the recertification form or did not respond to SBA’s notification. According to HUBZone officials, firms may fail to respond because they are no longer in business or are no longer interested in participating in the program.
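The 60-day timeliness measure described above is a straightforward elapsed-days comparison. The sketch below is hypothetical (SBA's goal is informal and unwritten, and the field names and dates here are illustrative, not drawn from HUBZone system data):

```python
from datetime import date

def decertification_timely(proposed: date, resolved: date, goal_days: int = 60) -> bool:
    """Return True if a proposed decertification was resolved within the goal.

    The 60-day default reflects the informal goal the HUBZone program
    office negotiated with the SBA IG in 2006; this helper is an
    illustrative assumption, not SBA's tracking logic.
    """
    elapsed = (resolved - proposed).days
    return elapsed <= goal_days

# Illustrative dates: proposed March 1, resolved April 15 is 45 days,
# within the goal; proposed March 1, resolved June 1 is 92 days, not.
```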
But firms also may not be responding because they no longer meet the eligibility requirements. Tracking the various reasons why firms are decertified could help SBA take appropriate action against firms that misrepresent their HUBZone eligibility status. While we were unable to determine how many firms were awarded HUBZone contracts after they were proposed for decertification, our analysis showed that 90 of the firms proposed for decertification in fiscal years 2004 through 2007 received HUBZone set-aside dollars after being decertified. However, some of these firms may have been awarded the contracts before they were decertified. As a consequence of generally not meeting its 60-day goal, lags in the processing of decertifications have increased the risk of ineligible firms participating in the program. SBA has taken limited steps to assess the effectiveness of the HUBZone program. While SBA has a few performance measures in place that provide some data on program outputs, such as the number of certifications and examinations, the measures do not directly link to the program’s mission. SBA has plans for assessing the program’s effectiveness but has not devoted resources to implement such plans. Although Congress’s goal is for agencies to award 3 percent of their annual contracting dollars to qualifying firms located in HUBZones, most federal agencies did not meet the goal for fiscal year 2006—the total for federal agencies reached approximately 2 percent. Factors such as conflicting guidance on how to consider the various small business programs when awarding contracts and a lack of HUBZone firms with the necessary expertise may have affected the ability of federal agencies to meet their HUBZone goals. While SBA has some measures in place to assess the performance of the HUBZone program, the agency has not implemented its plans to conduct an evaluation of the program’s benefits. 
According to the Government Performance and Results Act (GPRA) of 1993, federal agencies are required to identify results-oriented goals and measure performance toward the achievement of their goals. We have previously reported on the attributes of effective performance measures. We noted that for performance measures to be useful in assessing program performance, they should be linked or aligned with program goals and cover the activities that an organization is expected to perform to support the intent of the program. We reviewed SBA’s performance measures for the HUBZone program and found that although the measures related to the core activity of the program (providing federal contracting assistance), they were not directly linked to the program’s mission of stimulating economic development and creating jobs in economically distressed communities. According to SBA’s fiscal year 2007 Annual Performance Report, the three performance measures were: number of small businesses assisted (which SBA defines as the number of applications approved and the number of recertifications processed), annual value of federal contracts awarded to HUBZone firms, and number of program examinations completed. The three measures provide some data on program activity, such as the number of certifications and program examinations and contract dollars awarded to HUBZone firms. However, they do not directly measure the program’s effect on firms (such as growth in employment or changes in capital investment) or directly measure the program’s effect on the communities in which the firms are located (for instance, changes in median household income or poverty levels). While SBA’s performance measures for the HUBZone program do not link directly to the program’s mission, the agency has made attempts to assess the effect of the program on firms. In fiscal years 2005 and 2006, SBA conducted surveys of HUBZone firms. 
According to SBA data on the surveys, HUBZone firms responding to the 2005 survey reported they had hired a total of 11,461 employees as a result of their HUBZone certification, and HUBZone firms responding to the 2006 survey reported they had hired a total of 12,826 employees (see table 5). Based on the firms that responded to the 2005 survey, the total capital investment increase in HUBZone firms as a result of firm certification was approximately $523.8 million as of August 31, 2005. As of September 12, 2006, the total capital investment increase based on firms responding to the 2006 survey was approximately $372.6 million. SBA did not conduct this survey in fiscal year 2007, but officials stated that they planned to conduct a similar survey during fiscal year 2008. However, the survey results have several limitations. For instance, the 2005 and 2006 surveys appear to have had an approximate response rate of 33 percent and 27 percent, respectively, which may increase the risk that survey results are not representative of all HUBZone firms. It also is unclear whether the survey results were reliable because SBA did not provide detailed guidance on how to define terms such as capital investment, which may have led to inconsistent responses. Finally, while the surveys measured increased employment and capital investment by firms—which provided limited assessment of, and could be linked to, the program’s effect on individual firms—they did not provide data that showed the effect of the program on the communities in which they were located. Since the purpose of the HUBZone program is to stimulate economic development in economically distressed communities, useful performance measures should be linked to this purpose. 
Similarly, the Office of Management and Budget (OMB) noted in its 2005 Program Assessment Rating Tool (PART) that SBA needed to develop baseline measures for some of its HUBZone performance measures and encouraged SBA to focus on more outcome-oriented measures that more effectively evaluate the results of the program. Although OMB gave the HUBZone program an assessment rating of “moderately effective,” it stated that SBA had limited data on, and had conducted limited assessments of, the program’s effect. The assessment also emphasized the importance of systematic evaluation of the program as a basis for programmatic improvement. The PART assessment also documented plans that SBA had to conduct an analysis of the economic impact of the HUBZone program on a community-by-community basis using data from the 2000 and 2010 decennial census. SBA stated its intent to assess the program’s effect in individual communities by comparing changes in socioeconomic data over time. Variables that the program office planned to consider included median household income, average educational levels, and residential/commercial real estate values. Additionally, in a mandated 2002 report to Congress, SBA identified potential measures to more effectively assess the HUBZone program. These measures included assessing full-time jobs created in HUBZone areas and the larger areas of which they were a part, the amount of investment-related expenditures in HUBZone areas and the larger areas of which they were a part, and changes in construction permits and home loans in HUBZone areas. While SBA has recognized the need to assess the results of the HUBZone program, SBA officials indicated that the agency has not devoted resources to implement either of these strategies for assessing the results of the program. Yet by not evaluating the HUBZone program’s benefits, SBA lacks key information that could help it better manage the program and inform Congress of its results.
We also conducted site visits to four HUBZone areas (Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California) to better understand to what extent stakeholders perceived that the HUBZone program generated benefits. For all four HUBZone areas, the perceived benefits of the program varied, with some firms indicating they have been able to win contracts and expand their firms and others indicating they had not realized any benefits from the program. Officials representing economic development entities varied in their knowledge of the program, with some stating they lacked information on the program’s effect that could help them inform small businesses of its potential benefits. (See appendix V for more information on our site visits.) Although contracting dollars awarded to HUBZone firms have increased since fiscal year 2003—when the statutory goal of awarding 3 percent of federally funded contract dollars to HUBZone firms went into effect—federal agencies collectively still have not met that goal. According to data from SBA’s goaling reports, for fiscal years 2003 through 2006, the percentage of prime contracting dollars awarded to HUBZone firms increased but was still about one-third short of the statutory goal for fiscal year 2006 (see table 6). In fiscal year 2006, 8 of 24 federal agencies met their HUBZone goals. Of the 8 agencies, 4 had goals higher than the 3 percent requirement and were able to meet the higher goals. Of the 16 agencies not meeting their HUBZone goal, 10 awarded less than 2 percent of their small-business-eligible contracting dollars to HUBZone firms. According to SBA’s most recent guidance on the goaling process, agencies are required to submit a report explaining why goals were not met, along with a plan for corrective action. Federal agencies may not have met their HUBZone goals for various reasons, which include uncertainty about how to properly apply federal contracting preferences.
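The goaling comparison in SBA's reports amounts to a percentage-of-dollars check against each agency's goal. The sketch below is illustrative only; the dollar figures and the default 3 percent threshold mirror the statutory government-wide goal, but the function and example values are assumptions rather than SBA's goaling methodology:

```python
def hubzone_goal_met(hubzone_dollars: float, eligible_dollars: float,
                     goal: float = 0.03) -> bool:
    """Return True if HUBZone awards meet or exceed the agency's goal.

    The 3 percent default reflects the statutory government-wide goal;
    some agencies set higher goals, passed via the `goal` parameter.
    Figures used with this helper are illustrative, not from SBA's
    goaling reports.
    """
    if eligible_dollars <= 0:
        return False
    return hubzone_dollars / eligible_dollars >= goal

# Illustrative: an agency awarding $40 million of a $2 billion eligible
# base reaches 2 percent and falls short of the 3 percent goal.
```

An agency with a negotiated goal above 3 percent would simply pass that higher value, which is why an agency can exceed 3 percent yet still miss its own goal.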
For instance, federal contracting officials reported facing conflicting guidance about the order in which the various small business programs—the HUBZone program, the 8(a) program, and the service-disabled veteran-owned small business program—should be considered when awarding contracts. The 2007 Report of the Acquisition Advisory Panel concluded that contracting officers need definitive guidance on the priority for applying the various small business contracting preferences to specific acquisitions. The report stated that each program has its own statutory and regulatory requirements. It also noted that both SBA and the Federal Acquisition Regulatory Council (FAR Council) have attempted to interpret these provisions but that their respective regulations conflict with each other. According to the report, in general, SBA’s regulations provide for parity among most of the programs and give discretion to the contracting officer by stating that the contracting officer should consider setting aside the requirement for 8(a), HUBZone, or service-disabled veteran-owned firms’ participation before considering setting aside the requirement as a small business set-aside. However, according to the report, the FAR currently conflicts with SBA’s regulations by providing that, before deciding to set aside an acquisition for small businesses, HUBZone firms, or service-disabled veteran-owned small firms, the contracting officer should review the acquisition for offering under the 8(a) program. Officials at three of the four agencies we interviewed (Commerce, DHS, and SSA) regarding the awarding of contracts to small businesses stated that contracting officers occasionally faced uncertainty when applying the guidelines on awarding contracts under these programs. 
In March 2008, a proposal to amend the FAR was published with the purpose of ensuring that the FAR clearly reflects SBA’s interpretation of the Small Business Act and SBA’s interpretation of its regulations about the order of precedence that applies when deciding whether to satisfy a requirement through award under these various types of small business programs. Among other things, the proposed rule is intended to make clear that there is no order of precedence among the 8(a), HUBZone, or service-disabled veteran-owned small business programs. The proposed rule stated that SBA believes that, among other factors, progress in fulfilling the various small business goals should be considered in making a decision as to which program is to be used for an acquisition. Federal contracting officials from the four agencies also explained that it was sometimes difficult to identify HUBZone firms with the required expertise to fulfill contracts. For example, DHS acquisition officials stated that market research that their contracting officers conducted sometimes indicated there were no qualified HUBZone firms in industries in which DHS awarded contracts. Specifically, a contracting officer in the U.S. Coast Guard’s Maintenance and Logistics Command explained that for contracts requiring specialized types of ship-repair work, the Coast Guard sometimes could not find sufficient numbers of HUBZone firms with the capacity and expertise to perform the work in the time frame required. SSA officials also stated that the agency awards most of its contracts to firms in the information technology industry and that contracting officers at times have had difficulty finding qualified HUBZone firms operating in this industry due to the amount of infrastructure and technical expertise required. 
Officials representing the Defense Threat Reduction Agency (an agency within DOD) also stated they often have difficulty finding qualified HUBZone firms that can fulfill their specialized technology needs. Lastly, Commerce officials explained that a review of the top 25 North American Industry Classification System (NAICS) codes under which the agency awarded contracts in fiscal year 2007 showed that fewer than 100 HUBZone firms operated in 13 of these 25 industries, including 5 industries that had fewer than 5 firms operating. They noted that these small numbers increased the difficulty of locating qualified HUBZone firms capable of meeting Commerce’s requirements. We did not validate the statements made by these federal contracting officials related to the difficulty they face in awarding contracts to HUBZone firms. Finally, according to contracting officers we interviewed, the availability of sole-source contracting under SBA’s 8(a) program could make the 8(a) program more appealing than the HUBZone program. Through sole-source contracting, contracting officers have more flexibility in awarding contracts directly to an 8(a) firm without competition. According to U.S. Coast Guard contracting officers we interviewed, this can save 1 to 2 months when trying to award a contract. Sole-source contracts are available to HUBZone program participants but only when the contracting officer does not have a reasonable expectation that two or more qualified HUBZone firms will submit offers. Contracting officers we interviewed regarding HUBZone sole-source contracts stated that this is rarely the case. In fiscal year 2006, $5.8 billion (about 44 percent) of all dollars obligated to small business 8(a) firms were awarded through 8(a) sole-source contracts. In contrast, about 1 percent of the contracts awarded to HUBZone firms were HUBZone sole-source contracts.
Because agencies can count contracting dollars awarded to small businesses under more than one socioeconomic subcategory, it can be difficult to identify how many contract dollars firms received based on a particular designation. Small businesses can qualify for contracts under multiple socioeconomic programs. For example, if a HUBZone certified firm was owned by a service-disabled veteran, it could qualify for contracts set aside for HUBZone firms, as well as for contracts set aside for service-disabled veteran-owned businesses. The contracting dollars awarded to this firm would count toward both of these programs’ contracting goals. We reviewed FPDS-NG data on contracts awarded to HUBZone firms in fiscal year 2006. We found that approximately 45 percent of contracts awarded to HUBZone firms were not set aside for any particular socioeconomic program (see fig. 8). The next largest percentage, about 23 percent, were 8(a) sole-source contracts awarded to HUBZone firms that also participated in SBA’s 8(a) business development program. These firms did not have any competitors for the contracts awarded. HUBZone set-aside contracts, or contracts for which only HUBZone firms can compete, accounted for about 11 percent of the dollars awarded to HUBZone firms. This ability to count contracts toward multiple socioeconomic goals makes it difficult to determine how HUBZone certification may have played a role in winning a contract, especially when considering the limited amount of contract dollars awarded to HUBZone firms relative to the HUBZone goal. It can also make it more difficult to isolate the effect of HUBZone program status on economic conditions in a community. The map contained on the HUBZone Web site is the primary means of disseminating HUBZone information. The map offers small businesses an easy and readily accessible way of determining whether they can apply for HUBZone certification. 
However, those positive attributes have been undermined because the map reflects inaccurate and out-of-date information. In particular, as of May 2008, SBA’s HUBZone map included 50 ineligible areas and excluded 27 eligible areas. As a result, ineligible small businesses have been able to participate in the program, while eligible businesses have not been able to participate. By working with its contractors to eliminate inaccuracies and more frequently updating the map, SBA will help ensure that only eligible firms have opportunities to participate in the program. Although SBA relies on federal law to identify HUBZone areas, statutory changes over time have resulted in more areas being eligible for the program. Specifically, revisions to the statutory definition of HUBZone areas since 1999 have nearly doubled the number of areas and created areas that can be characterized as less economically distressed than areas designated under the original statutory criteria. While establishing new HUBZone areas could provide economic benefits to these new areas, as the program becomes less targeted and contracting dollars more dispersed, the program could have less of an effect on individual HUBZone areas. Such an expansion could diffuse the benefits that could be derived by steering businesses to economically distressed areas. Given the potential for erosion of the intended economic benefits of the program, further assessment of the criteria used to determine eligible HUBZone areas, in relation to overall program outcomes, may be warranted. The mechanisms that SBA uses to certify and monitor firms provide limited assurance that only eligible firms participate in the program. SBA does not currently have guidance on precisely when HUBZone program staff should request documentation from firms to support the information reported on their application, and it verifies information reported by firms at application or during recertification in limited instances. 
Also, SBA does not follow its policy of recertifying all firms every 3 years. Further, SBA lacks a formal policy on how quickly it needs to make a final determination on decertifying firms that may no longer be eligible for the program. From fiscal years 2004 through 2007, SBA failed to resolve proposed decertifications within its informal goal of 60 days for more than 3,200 firms. More routinely obtaining supporting documentation upon application and conducting more frequent site visits would represent a more efficient and consistent use of SBA’s limited resources. It could help ensure that firms applying for certification are truly eligible, thereby reducing the need to spend a substantial amount of resources during any decertification process. In addition, an SBA effort to consistently follow its current policy of recertifying firms every 3 years, and to formalize and adhere to a specific time frame for decertifying firms, would help prevent ineligible firms from obtaining HUBZone contracts. By not evaluating the HUBZone program’s benefits, SBA lacks key information that could help it better manage the program and inform Congress of its results. SBA has some measures to assess program performance, but they are not linked to the program’s mission and thus do not measure the program’s effect on the communities in which HUBZone firms are located. While SBA identified several strategies for assessing the program’s effect and conducted limited surveys, it has not devoted resources to conduct a comprehensive program evaluation of the program’s effect on communities. We recognize the challenges associated with evaluating the economic effect of the program, such as isolating the role that HUBZone certification plays in obtaining federal contracts and generating benefits for communities.
Because contract dollars awarded to firms in one small business program also could represent part of the dollars awarded in other programs, contract dollars awarded to HUBZone firms at best represent a broad indicator of program influence on a community’s economic activity. In addition, the varying levels of economic distress among HUBZone areas can further complicate such an evaluation. Despite these challenges, completing an evaluation would offer several benefits to the agency and the HUBZone program, including determining how well it is working across various communities, especially those that suffer most from economic distress. Such an evaluation is particularly critical in light of the expansion in the number of HUBZone areas, the potential for erosion of the intended economic benefits of the program from such expansion, and the wide variation in the economic characteristics of these areas. To improve SBA’s administration and oversight of the HUBZone program, we recommend that the Administrator of SBA take the following actions: Take immediate steps to correct and update the map that is used to identify HUBZone areas and implement procedures to ensure that the map is updated with the most recently available data on a more frequent basis. Develop and implement guidance to more routinely and consistently obtain supporting documentation upon application and conduct more frequent site visits, as appropriate, to ensure that firms applying for certification are eligible. Establish a specific time frame for eliminating the backlog of recertifications and ensure that this goal is met, using either SBA or contract staff, and take the necessary steps to ensure that recertifications are completed in a more timely fashion in the future. Formalize and adhere to a specific time frame for processing firms proposed for decertification in the future. 
Further develop measures and implement plans to assess the effectiveness of the HUBZone program that take into account factors such as (1) the economic characteristics of the HUBZone area and (2) contracts being counted under multiple socioeconomic subcategories. We requested SBA’s comments on a draft of this report, and the Associate Administrator for Government Contracting and Business Development provided written comments that are presented in appendix II. SBA agreed with our recommendations and outlined steps that it plans to take to address each recommendation. First, SBA stated that it recognizes the valid concerns we raised concerning the HUBZone map and noted that efforts are under way to improve the data and procedures used to produce this important tool. Specifically, SBA plans to issue a new contract to administer the HUBZone map and anticipates that the maps will be updated and available no later than August 29, 2008. Further, SBA stated that, during the process of issuing the new contract, the HUBZone program would issue new internal procedures to ensure that the map is continually updated. Second, SBA stated that it appreciates our concern about the need to obtain supporting documents in a more consistent manner. In line with its efforts to formalize HUBZone processes, the agency noted that it was formulating procedures that would provide sharper guidance as to when supporting documentation and site visits would be required. Specifically, SBA plans to identify potential areas of concern during certification that would mandate additional documentation and site visits. Third, SBA noted that the HUBZone program had obtained additional staff to work through the backlog of pending recertifications and stated that this effort would be completed by September 30, 2008.
Further, to ensure that recertifications will be handled in a more timely manner, SBA stated that the HUBZone program has made dedicated staffing changes and will issue explicit changes to procedures. Fourth, SBA stated that it is aware of the need to improve the effectiveness and consistency of the decertification process. SBA noted that it would issue new procedures to clarify and formalize the decertification process and its timelines. Among other things, SBA stated that the new decertification procedure would establish a 60-day deadline to complete any proposed decertification. Finally, SBA acknowledged that using HUBZone performance measures in a more systematized way to evaluate the program’s effectiveness would be beneficial and would provide important new information to improve and focus the HUBZone program. Therefore, SBA stated that it would develop an assessment tool to measure the economic benefits that accrue to areas in the HUBZone program and that the HUBZone program would then issue periodic reports accompanied by the underlying data. We also provided copies of the draft report to Commerce, DOD, DHS, and SSA. All four agencies responded that they had no comments. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Ranking Member, House Committee on Small Business, other interested congressional committees, and the Administrator of the Small Business Administration. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix VI. To review the Small Business Administration’s (SBA) administration and oversight of the HUBZone program, we examined (1) the criteria and process that SBA uses to identify and map HUBZone areas and the economic characteristics of such areas; (2) the mechanisms that SBA uses to ensure that only eligible small businesses participate in the HUBZone program; and (3) the actions SBA has taken to assess the results of the program and the extent to which federal agencies have met their HUBZone contracting goals. To identify the criteria that SBA uses to identify HUBZone areas, we reviewed applicable statutes, regulations, and agency documents. Because the HUBZone program also uses statutory definitions from the Department of Housing and Urban Development’s (HUD) low-income-housing tax credit program, we reviewed the statutes and regulations underlying the definitions of a qualified census tract and difficult development area. To determine the process that SBA uses to identify HUBZone areas, we interviewed SBA officials and the contractor that developed and maintains the HUBZone map on SBA’s Web site. We also reviewed the policies and procedures the contractor follows when mapping HUBZone areas. Using historical data provided by SBA’s mapping contractor, we determined how the number of HUBZone areas has changed over time. We also used these historical data to determine if SBA had complied with its policy of asking the contractor to update the map every time the HUBZone area definition changed or new data used to designate HUBZone areas (for example, HUD’s lists of difficult development areas and unemployment data from the Bureau of Labor Statistics or BLS) became available. To assess the accuracy of the current HUBZone map, we compared the difficult development areas on the map with the statutory definition of a difficult development area. 
We also compared HUD’s 2008 list of qualified census tracts to the areas designated on the map and analyzed 2007 unemployment data from BLS (the most recent available) to determine if all of the nonmetropolitan counties that met the HUBZone eligibility criteria were on the map. Once we identified the current HUBZone areas, we used 2000 census data (the most complete data set available) to examine the economic characteristics of these areas. The 2000 census data are sample estimates and are, therefore, subject to sampling error. To test the impact of these errors on the classification of HUBZone areas, we simulated the potential results by allowing the estimated value to change within the sampling error distribution of the estimate and then reclassified the results. As a result of these simulations, we determined that the sampling error of the estimates had no material impact on our findings. For metropolitan and nonmetropolitan-qualified census tracts, nonmetropolitan counties, and difficult development areas in the 50 states and District of Columbia, we looked at common indicators of economic distress—poverty rate, unemployment rate, median household income, and median housing value. In measuring median household income and median housing value, we compared each HUBZone with the metropolitan area (for metropolitan-qualified census tracts) in which it was located or with the state nonmetropolitan area (for nonmetropolitan-qualified census tracts, nonmetropolitan counties, and difficult development areas) to put the values into perspective. We limited our analysis of Indian Country to poverty and unemployment rates because Indian lands vary in nature; therefore, no one unit of comparison worked for all areas when reporting median household income and median housing value. We could not examine the economic characteristics of base closure areas because they do not coincide with areas for which census data are collected.
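The sampling-error simulation described above can be illustrated with a minimal sketch: perturb each area's estimated value within its sampling error distribution and reclassify against the eligibility threshold, counting how often the classification flips. The tract data, field names, and use of a normal error distribution here are illustrative assumptions for demonstration, not the report's actual data or code.

```python
import random

# Qualified-census-tract poverty criterion (percent); the actual simulation
# applied the applicable criterion for each area type.
POVERTY_THRESHOLD = 25.0

def simulate_classification(tracts, n_trials=1000, seed=42):
    """For each trial, draw each tract's poverty rate from a normal
    distribution centered on the estimate, reclassify it against the
    threshold, and record how often its status flips from the baseline."""
    rng = random.Random(seed)
    flips = {t["id"]: 0 for t in tracts}
    for _ in range(n_trials):
        for t in tracts:
            drawn = rng.gauss(t["poverty_est"], t["std_error"])
            baseline = t["poverty_est"] >= POVERTY_THRESHOLD
            if (drawn >= POVERTY_THRESHOLD) != baseline:
                flips[t["id"]] += 1
    # Fraction of trials in which each tract's classification changed.
    return {tid: count / n_trials for tid, count in flips.items()}

# Hypothetical tracts: one near the threshold (classification is sensitive
# to sampling error) and one well above it (classification is stable).
tracts = [
    {"id": "tract_A", "poverty_est": 25.4, "std_error": 1.5},  # borderline
    {"id": "tract_B", "poverty_est": 40.0, "std_error": 1.5},  # clearly qualified
]
rates = simulate_classification(tracts)
```

A finding of "no material impact" corresponds to flip rates near zero for the areas that drive the results; only borderline areas such as `tract_A` show meaningful sensitivity.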
To further examine the economic characteristics of qualified HUBZone areas, we analyzed the effect of hypothetical changes to the economic criteria used to designate qualified census tracts and nonmetropolitan counties. (We report the results of this analysis in app. III.) First, we adjusted the economic criteria used to designate qualified census tracts: (1) a poverty rate of at least 25 percent or (2) 50 percent or more of the households with incomes below 60 percent of each area’s median gross income. Second, we adjusted the criteria used to designate nonmetropolitan counties: (1) a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or (2) an unemployment rate not less than 140 percent of the state or national unemployment rate (whichever is lower). In both cases, we made the criteria more stringent as well as less stringent. We assessed the reliability of the census and BLS data we used to determine the economic characteristics of HUBZone areas by reviewing information about the data and performing electronic data testing to detect errors in completeness and reasonableness. We determined that the data were sufficiently reliable for the purposes of this report. To determine how SBA ensures that only eligible small businesses participate in the HUBZone program, we reviewed policies and procedures established by SBA for certifying and monitoring HUBZone firms and internal control standards for federal agencies. We also interviewed SBA headquarters and field officials regarding the steps they take to certify and monitor HUBZone firms. We then assessed the actions that SBA takes to help ensure that only eligible firms participate against its policies and procedures and selected internal controls. 
In examining such compliance, we analyzed data downloaded from the HUBZone Certification Tracking System (the information system used to manage the HUBZone program) as of January 22, 2008, to determine the extent of SBA monitoring. Specifically, we analyzed the data to determine (1) the number of applications submitted in fiscal years 2000 through 2007 and their resolution; (2) the number of recertifications that SBA performed in fiscal years 2005 through 2007 and their results; (3) the number of recertifications conducted of HUBZone firms based on the number of years firms had been in the program; (4) the number of program examinations that SBA performed in fiscal years 2004 through 2007 and their results; (5) the number of program examinations conducted of HUBZone firms based on the number of years firms had been in the program; and (6) the number of firms proposed for decertification in fiscal years 2004 through 2007. We also analyzed Federal Procurement Data System-Next Generation (FPDS-NG) data to determine the extent to which firms that had been proposed for decertification or had actually been decertified had obtained federal contracts. Because the HUBZone Certification Tracking System does not readily provide information on the extent to which SBA requests documentation from firms or conducts site visits during certification and monitoring, we conducted reviews of all 125 applications, 15 recertifications, and 11 program examinations begun in September 2007 and completed by January 22, 2008 (the date of the data set). For applications, we selected those that were logged into the system in September 2007. For recertifications and program examinations, we selected those cases where the firm had acknowledged receipt of the notice that they had been selected for review in September 2007; we chose September 2007 because most of the cases had been processed by January 22, 2008. 
Further, we analyzed (1) FPDS-NG data for fiscal year 2006 (the most recent year available at the time of our analysis) and (2) Dynamic Small Business Source System (DSBSS) data as of December 12, 2007, to identify select characteristics of businesses that participated in the program. DSBSS contains information on firms that have registered in the Central Contractor Registration system (a database that contains information on all potential federal contractors) as small businesses. We assessed the reliability of the HUBZone Certification Tracking System, FPDS-NG, and DSBSS data we used by reviewing information about the data and performing electronic data testing to detect errors in completeness and reasonableness. We determined that the data were sufficiently reliable for the purposes of this report. To determine the measures that SBA has in place to assess the results of the HUBZone program, we reviewed SBA’s performance reports and other agency documents. We then compared SBA’s performance measures for the HUBZone program to our guidance on the attributes of effective performance measures. To determine the extent to which federal agencies have met their contracting goals, we (1) analyzed data from FPDS-NG and (2) reviewed SBA reports on agency contracting goals and accomplishments, such as federal contracting dollars awarded by agency for the various small business programs, for fiscal years 2003 through 2006. We also reviewed Federal Acquisition Regulation and SBA guidance and other relevant documentation. In addition, we interviewed small business and contracting officials at a nongeneralizable sample of agencies (the Departments of Commerce, Defense, and Homeland Security and the Social Security Administration) to determine what factors affect federal agencies’ ability to meet HUBZone contracting goals.
We selected agencies that received a range of scores as reported in SBA’s fiscal year 2006 Small Business Procurement Scorecard and awarded varying amounts of contracts to HUBZone firms. To explore benefits that the program may have generated for selected firms and communities, we visited a nongeneralizable sample of four HUBZone areas: Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California. In selecting these areas, we considered geographic dispersion, the type of HUBZone area, and the dollar amount of contracts awarded to HUBZone firms. During each site visit, we interviewed officials from the SBA district office, the Chamber of Commerce, a small business development center, and certified HUBZone firms, with the exception of the city of Long Beach, where we did not meet with the Chamber of Commerce. We conducted this performance audit from August 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In this appendix, we provide information on the economic characteristics of three types of HUBZone areas: (1) qualified census tracts, which have 50 percent or more of their households with incomes below 60 percent of the area median gross income or have a poverty rate of at least 25 percent and cannot contain more than 20 percent of the area population; (2) qualified Indian reservations, which include lands covered by a federal statutory definition of “Indian Country;” and (3) qualified nonmetropolitan counties, or those having a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or an unemployment rate that is not less than 140 percent of the state average unemployment rate or the national average unemployment rate (whichever is lower). Other types of HUBZone areas are base closure areas and difficult development areas. First, we report economic data for those HUBZone areas that are nonmetropolitan-qualified census tracts and Indian Country areas. Second, to further illustrate the economic diversity among qualified HUBZone areas, we provide data on the effect of hypothetical changes to the economic criteria used to designate metropolitan-qualified census tracts and nonmetropolitan counties. Based on poverty rates, nonmetropolitan-qualified census tracts appear to be as economically distressed as metropolitan-qualified census tracts. About 99 percent of nonmetropolitan census tracts (excluding redesignated areas, which no longer meet the economic criteria but by statute remain eligible until after the release of the 2010 decennial census data) had a poverty rate of 20 percent or more (see fig. 9). Similarly, about 93 percent of metropolitan census tracts (excluding redesignated areas) met this criterion. However, there are some differences between the economic characteristics of nonmetropolitan- and metropolitan-qualified census tracts. 
For example, 402 of the 1,272 nonmetropolitan census tracts (about 32 percent) had housing values that were less than 60 percent of the area median housing value, while 57 percent of metropolitan census tracts had housing values that met this criterion. Overall, we found that qualified Indian Country areas tend to be economically distressed (see fig. 10). For example, 310 of the 651 Indian Country areas (about 48 percent) had poverty rates of 20 percent or more. In addition, Indian Country areas had much higher rates of unemployment than any other type of HUBZone area. For example, 160 Indian Country areas (about 25 percent) had unemployment rates of 20 percent or more. In contrast, metropolitan census tracts and nonmetropolitan counties (excluding redesignated areas) had unemployment rates that met this same criterion of about 18 percent and just less than 2 percent, respectively. As discussed above, qualified HUBZone areas are economically diverse; therefore, adjustments to the qualifying criteria could affect the number and type of eligible areas. Qualified census tracts must meet at least one of two economic criteria: (1) have a poverty rate of at least 25 percent or (2) be an area in which 50 percent or more of the households have incomes below 60 percent of the area’s median gross income. By using a poverty rate of 10 percent or more for metropolitan census tracts, however, 14,258 additional metropolitan census tracts could be eligible for the program (an increase of about 143 percent), depending on whether they met the other eligibility requirements (see table 7). In contrast, by using a poverty rate of 40 percent or more for metropolitan census tracts, the number of metropolitan census tracts (those tracts that currently meet eligibility criteria and those that are redesignated) could decrease from 9,959 to 2,270 (a decrease of about 77 percent). 
Qualified nonmetropolitan counties are also determined by two economic criteria: (1) a median household income of less than 80 percent of the median household income for the state nonmetropolitan area or (2) an unemployment rate not less than 140 percent of the state or national unemployment rate (whichever is lower). By using a county median household income of less than 90 percent of the median household income for the state nonmetropolitan area, 29 additional nonmetropolitan counties could be eligible for the program (see table 8). By using a county median household income of less than 70 percent of the median household income for the state nonmetropolitan area, the number of eligible HUBZone-qualified nonmetropolitan counties could decrease from 1,162 to 43 (about 96 percent). To examine the characteristics of HUBZone firms, we analyzed data from SBA’s Dynamic Small Business Source System (DSBSS) as of December 12, 2007. DSBSS contains information on firms that have registered as small businesses in the Central Contractor Registration system (a database that contains information on all potential federal contractors). With the exception of information on the firms’ HUBZone, 8(a), and Small Disadvantaged Business certifications, the data in the system are self-reported. We found that HUBZone firms vary in size, ownership, types of services and products provided, and additional small business designations leveraged. Specifically, our analysis showed the following: The size of HUBZone firms varies. We chose two measures to describe the size of HUBZone firms—number of employees and average gross revenue. The average number of staff at HUBZone firms was 24. However, half of HUBZone firms had 6 or fewer employees. The average gross revenue for HUBZone firms was almost $3.5 million per year. However, half of HUBZone firms earned $600,000 or less annually. Ownership status is diverse.
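The two statutory tests discussed above—for qualified census tracts and for qualified nonmetropolitan counties—can be sketched as simple eligibility checks. The thresholds are left as parameters so that the hypothetical adjustments analyzed in this appendix (for example, a 10 or 40 percent poverty cutoff) can be explored; the function and parameter names are illustrative assumptions, not SBA's actual data schema or code.

```python
def census_tract_qualifies(poverty_rate, pct_households_below_60pct_ami,
                           poverty_cutoff=25.0, low_income_cutoff=50.0):
    """A census tract meets the economic criteria if it has either
    (1) a poverty rate of at least `poverty_cutoff` percent, or
    (2) at least `low_income_cutoff` percent of households with incomes
        below 60 percent of the area's median gross income.
    (The statutory 20 percent area-population cap is a separate test.)"""
    return (poverty_rate >= poverty_cutoff
            or pct_households_below_60pct_ami >= low_income_cutoff)

def nonmetro_county_qualifies(county_median_income, state_nonmetro_median_income,
                              county_unemployment, state_unemployment,
                              national_unemployment,
                              income_ratio=0.80, unemployment_ratio=1.40):
    """A nonmetropolitan county qualifies if it has either
    (1) a median household income below `income_ratio` of the state
        nonmetropolitan median, or
    (2) an unemployment rate not less than `unemployment_ratio` times the
        lower of the state and national unemployment rates."""
    income_test = county_median_income < income_ratio * state_nonmetro_median_income
    benchmark = min(state_unemployment, national_unemployment)
    unemployment_test = county_unemployment >= unemployment_ratio * benchmark
    return income_test or unemployment_test
```

Raising `poverty_cutoff` from 25.0 to 40.0, or lowering `income_ratio` from 0.80 to 0.70, reproduces the direction of the hypothetical tightening reported in tables 7 and 8: fewer areas pass the test.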
Approximately 30 percent of HUBZone firm owners were women, while 37 percent were minorities. Table 9 breaks out the owners of HUBZone firms based on race and ethnicity. HUBZone firms operate in a variety of industries as defined by North American Industry Classification System (NAICS) codes, and many operate in multiple industries. Table 10 lists the top 10 industries in which HUBZone firms operated and the number of HUBZone firms that provided a service or product related to that industry. HUBZone firms often have other small business designations. Although the majority of HUBZone firms had only the HUBZone designation, 32 percent had one additional designation, which was most often the service-disabled, veteran-owned designation. Table 11 shows the extent to which HUBZone firms had other small business designations. We conducted site visits to four HUBZone areas—Lawton, Oklahoma; Lowndes County, Georgia; and Long Beach and Los Angeles, California— to better understand to what extent benefits have been generated by the HUBZone program. These four areas represent various types of HUBZone areas (see table 12), and we found that the perceived benefits of the HUBZone program varied across these locations. The majority of the individuals we interviewed indicated that their firms had received some benefit from HUBZone certification. In most cases, they cited as a benefit the ability to compete for and win contracts, which in some cases had allowed firms to expand or become more competitive. However, representatives of a few firms indicated they had not been able to win any contracts through the program, which made it difficult to realize any benefits. We also asked local economic development and Chamber of Commerce officials if they were familiar with the HUBZone program. 
We found varying levels of familiarity with the program, and some officials representing economic development entities stated they lacked information on the program’s effect that could help them inform small businesses of its potential benefits. Various representatives of HUBZone firms with whom we spoke stated that the HUBZone program provided advantages. The majority of representatives of HUBZone firms we interviewed stated that HUBZone certification had provided them with an additional opportunity to bid on federally funded contracts. Additionally, some of the business owners we interviewed who had received contracts stated that winning contracts through the HUBZone program had allowed their firm to grow (for example, to hire employees or expand operations). Representatives from two HUBZone firms located in Lawton, Oklahoma, that had received contracts through their HUBZone certification stated that the primary benefits associated with their HUBZone certification had been winning contracts that allowed them to hire additional employees and continue to build a reputation for their firms, which in turn had placed them in a better position to compete for additional contracts. Representatives of a HUBZone firm located in Valdosta, Georgia, stated that they had utilized the HUBZone program to obtain more contracts for their construction firm. They added that the program had allowed their firm to enter the federal government contracting arena, which provided additional opportunities aside from private-sector construction contracts. Representatives from three HUBZone firms in Los Angeles stated that they had won contracts through the program and had been able to build a stronger reputation for their firms by completing those contracts. Representatives of two of these firms also stated that the contracts they won through the program had helped their firms to grow and hire additional employees. 
For example, representatives from one HUBZone firm we interviewed stated that the firm had hired 10 to 15 full-time employees partly as a result of obtaining HUBZone contracts. However, representatives of some HUBZone firms stated that the program has not generated any particular benefits for their firm. For example, representatives of two HUBZone firms in Lawton, Oklahoma, and one HUBZone firm in Valdosta, Georgia, stated that their HUBZone certification had resulted in no contracts or not enough contracts to provide opportunities to “grow” their firm. They noted that the HUBZone certification alone was not sufficient when competing for federally funded contracts, particularly because—based on their experience—few contracts were set aside for HUBZone firms. Our interviewees indicated that they planned to stay in the program but were unlikely to see any benefits unless additional contracts were set aside for HUBZone firms. A representative from one HUBZone firm located in Long Beach, California, stated that her HUBZone firm had not been awarded any contracts directly through the program, but because of the firm’s HUBZone status, it had been able to perform work as a subcontractor on contracts that had HUBZone subcontracting goals. However, her firm had not grown or expanded employment through the program. We also found that, while some local economic development and Chamber of Commerce officials with whom we spoke were familiar with the HUBZone program, others were not. For example, in Lawton, Oklahoma, local economic development and Chamber of Commerce officials were familiar with the program and its requirements, largely because the city of Lawton has been designated a HUBZone area. In Valdosta, Georgia, Chamber of Commerce officials and officials from various economic development authorities were not familiar with the program and its requirements, but the small business development center official we interviewed was familiar with the program. 
In Long Beach and Los Angeles, California, most of the small business development center and economic development officials with whom we met also were relatively unfamiliar with the program, its goals, and how small businesses could use the program. Finally, officials representing economic development entities in Lowndes County, Georgia, and Los Angeles, California, stated that they lacked information on the program’s impact that could help them inform small businesses of its potential benefits. In addition to the contact named above, Paige Smith (Assistant Director), Triana Bash, Tania Calhoun, Bruce Causseaux, Alison Gerry, Cindy Gilbert, Julia Kennon, Terence Lam, Tarek Mahmassani, John Mingus, Marc Molino, Barbara Roesmann, and Bill Woods made key contributions to this report. | The Small Business Administration's (SBA) Historically Underutilized Business Zone (HUBZone) program provides federal contracting assistance to small firms located in economically distressed areas, with the intent of stimulating economic development. Questions have been raised about whether the program is targeting the locations and businesses that Congress intended to assist. GAO was asked to examine (1) the criteria and process that SBA uses to identify and map HUBZone areas and the economic characteristics of such areas, (2) the mechanisms SBA uses to ensure that only eligible small businesses participate in the program, and (3) the actions SBA has taken to assess the results of the program and the extent to which federal agencies have met their HUBZone contracting goals. To address these objectives, GAO analyzed statutory provisions, as well as SBA, census, and contracting data, and interviewed SBA and other federal and local officials. SBA relies on federal law to identify qualified HUBZone areas based on provisions such as median income in census tracts, but the map it uses to publicize HUBZone areas is inaccurate, and the economic characteristics of designated areas vary widely. 
To help firms determine if they are located in a HUBZone area, SBA publishes a map on its Web site. However, the map contains areas that are not eligible for the program and excludes some eligible areas. As a result, ineligible small businesses have been able to participate in the program, and eligible businesses have not been able to participate. Revisions to the statutory definition of HUBZone areas (such as allowing continued inclusion of areas that ceased to be qualified) have nearly doubled the number of areas and created areas that are less economically distressed than areas designated under the original criteria. Such an expansion could diffuse the benefits to be derived from steering businesses to economically distressed areas. The mechanisms that SBA uses to certify and monitor firms provide limited assurance that only eligible firms participate in the program. Although internal control standards state that agencies should verify information they collect, SBA verifies the information reported by firms on their application or during recertification--its process for monitoring firms--in limited instances and does not follow its own policy of recertifying all firms every 3 years. GAO found that more than 4,600 firms that had been in the program for at least 3 years went unmonitored. Further, SBA lacks a formal policy on how quickly it needs to make a final determination on decertifying firms that may no longer be eligible for the program. Of the more than 3,600 firms proposed for decertification in fiscal years 2006 and 2007, more than 1,400 were not processed within 60 days--SBA's unwritten target. As a result of these weaknesses, there is an increased risk that ineligible firms have participated in the program and had opportunities to receive federal contracts based on their HUBZone certification. 
SBA has taken limited steps to assess the effectiveness of the HUBZone program, and from 2003 to 2006 federal agencies did not meet the government-wide contracting goal for the HUBZone program. While SBA has some measures to assess the results of the HUBZone program, they are not directly linked to the program's mission, and the agency has not implemented its plans to conduct an evaluation of the program based on variables tied to the program's goals. Consequently, SBA lacks key information to manage the program and assess performance. Contracting dollars awarded to HUBZone firms increased from fiscal year 2003 to 2006, but consistently fell short of the government-wide goal of awarding 3 percent of annual contracting dollars to HUBZone firms. According to contracting officials GAO interviewed, factors such as conflicting guidance on how to consider the various small business programs when awarding contracts and a lack of HUBZone firms in certain industries may have affected the ability of federal agencies to meet their HUBZone goals. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The space shuttle is the world’s first reusable space transportation system. It consists of a reusable orbiter with three main engines, two partially reusable solid rocket boosters, and an expendable external fuel tank. Since it is the nation’s only launch system capable of carrying people to and from space, the shuttle’s viability is important to NASA’s other space programs, such as the International Space Station. NASA operates four orbiters in the shuttle fleet. Space systems are inherently risky because of the technology involved and the complexity of their activities. For example, thousands of people perform about 1.2 million separate procedures to prepare a shuttle for flight. NASA has emphasized that the top priority for the shuttle program is safety. The space shuttle’s workforce shrank from about 3,000 to about 1,800 full-time equivalent employees from fiscal year 1995 through fiscal year 1999. A major element of this workforce reduction was the transfer of shuttle launch preparation and maintenance responsibilities from the government and multiple contractors to a single private contractor. NASA believed that consolidating shuttle operations under a single contract would allow it to reduce the number of engineers, technicians, and inspectors directly involved in the day-to-day oversight of shuttle processing. However, the agency later concluded that these reductions caused shortages of required personnel to perform in-house activities and maintain adequate oversight of the contractor. Since the shuttle’s first flight in 1981, the space shuttle program has developed and incorporated many modifications to improve performance and safety. These include a super lightweight external tank, cockpit display enhancements, and main engine safety and reliability improvements. In 1994, NASA stopped approving additional upgrades, pending the potential replacement of the shuttle with another reusable launch vehicle.
NASA now believes that it will have to maintain the current shuttle fleet until at least 2012, and possibly through 2020. Accordingly, it has established a development office to identify and prioritize upgrades to maintain and improve shuttle operational safety. Last year, we reported that several internal studies showed that the shuttle program’s workforce had been negatively affected by downsizing. These studies concluded that the existing workforce was stretched thin to the point where many areas critical to shuttle safety—such as mechanical engineering, computer systems, and software assurance engineering—were not sufficiently staffed by qualified workers. (Appendix I identifies all of the key areas that were facing staff shortages). Moreover, the workforce was showing signs of overwork and fatigue. For example, indicators on forfeited leave, absences from training courses, and stress-related employee assistance visits were all on the rise. Lastly, the program’s demographic shape had changed dramatically. Throughout the Office of Space Flight, which includes the shuttle program, there were more than twice as many workers over 60 years old as under 30 years old. This condition clearly jeopardized the program’s ability to hand off leadership roles to the next generation. According to NASA’s Associate Administrator for the Office of Space Flight, the agency faced significant safety and mission success risks because of workforce issues. This was reinforced by NASA’s Aerospace Safety Advisory Panel, which concluded that workforce problems could potentially affect flight safety as the shuttle launch rate increased. NASA subsequently recognized the need to revitalize its workforce and began taking actions toward this end. In October 1999, NASA’s Administrator directed the agency’s highest-level managers to consider ways to reduce workplace stress.
The Administrator later announced the creation of a new office to increase the agency’s emphasis on health and safety and included improved health monitoring as an objective in its fiscal year 2001 performance plan. Finally, in December 1999, NASA terminated its downsizing plans for the shuttle program and initiated efforts to begin hiring new staff. Following the termination of its downsizing plans, NASA and the Office of Management and Budget conducted an overall workforce review to examine personnel needs, barriers to achieving proper staffing levels and skill mixes, and potential reforms to help address the agency’s long-term requirements. In performing this review, NASA used GAO’s human capital self-assessment checklist. The self-assessment framework provides a systematic approach for identifying and addressing human capital issues and allows agency managers to (1) quickly determine whether their approach to human capital supports their vision of who they are and what they want to accomplish and (2) identify those policies that are in particular need of attention. The checklist follows a five-part framework that includes strategic planning, organizational alignment, leadership, talent, and performance culture. NASA has taken a number of actions this year to regenerate its shuttle program workforce. Significantly, NASA’s current budget request projects an increase of more than 200 full-time equivalent staff for the shuttle program through fiscal year 2002—both new hires and staff transfers. According to NASA, from the beginning of fiscal year 2000 through July 2001, the agency had actually added 191 new hires and 33 transfers to the shuttle program. These new staff are being assigned to areas critical to shuttle safety—such as project engineering, aerospace vehicle design, avionics, and software—according to NASA. As noted earlier, appendix I provides a list of critical skills where NASA is addressing personnel shortages. 
NASA is also focusing more attention on human capital management in its annual performance plan. The Government Performance and Results Act requires a performance plan that describes how an agency’s goals and objectives are to be achieved. These plans are to include a description of the (1) operational processes, skills, and technology and (2) human, capital, and information resources required to meet those goals and objectives. On June 9, 2000, the President directed the heads of all federal executive branch agencies to fully integrate human resources management into agency planning, budget, and mission evaluation processes and to clearly state specific human resources management goals and objectives in their strategic and annual performance plans. In its Fiscal Year 2002 Performance Plan, NASA describes plans to attract and retain a skilled workforce. The specifics include the following: Developing an initiative to enhance NASA’s recruitment capabilities, focusing on college graduates. Cultivating a continued pipeline of talent to meet future science, math, and technology needs. Investing in technical training and career development. Supplementing the workforce with nonpermanent civil servants, where it makes sense. Funding more university-level courses and providing training in other core functional areas. Establishing a mentoring network for project managers. We will provide a more detailed assessment of the agency’s progress in achieving its human capital goals as part of our review of NASA’s Fiscal Year 2002 Performance Plan requested by Senator Fred Thompson. Alongside these initiatives, NASA is in the process of responding to a May 2001 directive from the Office of Management and Budget on workforce planning and restructuring.
The directive requires executive agencies to determine (1) what skills are vital to accomplishing their missions, (2) how changes expected in the agency’s work will affect human resources, (3) how skill imbalances are being addressed, (4) what challenges impede the agency’s ability to recruit and retain high-quality staff, and (5) what barriers there are to restructuring the workforce. NASA officials told us that they have already made these assessments. The next step is to develop plans specific to the space flight centers that focus on recruitment, retention, training, and succession and career development. If effectively implemented, the actions that NASA has been taking to strengthen the shuttle workforce should enable the agency to carry out its mission more safely. But there are considerable challenges ahead. For example, as noted by the Aerospace Safety Advisory Panel in its most recent annual report, NASA now has the difficult task of training new employees and integrating them into organizations that are highly pressured by the shuttle’s expanded flight rates associated with the International Space Station. As we stressed in our previous testimony, training alone may take as long as 2 years, while workload demands are higher than ever. The panel also emphasized that (1) stress levels among some employees are still a matter of concern; (2) some critical areas, such as information technology and electrical/electronic engineering, are not yet fully staffed; and (3) NASA is still contending with the retirements of senior employees. Officials at Johnson Space Center also cited critical skill shortages as a continuing problem. Furthermore, NASA headquarters officials stated that the stress-related effects of the downsizing remain in the workforce. 
Addressing these particular challenges, according to the Aerospace Safety Advisory Panel, will require immediate actions, such as expanded training at the Centers, as well as a long-term workforce plan that will focus on retention, recruitment, training, and succession and career development needs. The workforce problems we identified during our review are not unique to NASA. As our January 2001 Performance and Accountability Series reports made clear, serious federal human capital shortfalls are now eroding the ability of many federal agencies—and threatening the ability of others—to economically, efficiently, and effectively perform their missions. As the Comptroller General recently stated in testimony, the problem lies not with federal employees themselves, but with the lack of effective leadership and management, along with the lack of a strategic approach to marshaling, managing, and maintaining the human capital needed for government to discharge its responsibilities and deliver on its promises. To highlight the urgency of this governmentwide challenge, in January 2001, we added strategic human capital management to our list of federal programs and operations identified as high risk. Our work has found human capital challenges across the federal government in several key areas. First, high-performing organizations establish a clear set of organizational intents—mission, vision, core values, goals and objectives, and strategies—and then integrate their human capital strategies to support these strategic and programmatic goals. However, under downsizing, budgetary, and other pressures, agencies have not consistently taken a strategic, results-oriented approach to human capital planning. Second, agencies do not have the sustained commitment from leaders and managers needed to implement reforms. This can be difficult to achieve in the face of cultural barriers to change and high levels of turnover among management ranks.
Third, agencies have difficulties replacing the loss of skilled and experienced staff, and in some cases, filling certain mission-critical occupations because of increasing competition in the labor market. Fourth, agencies lack a crucial ingredient found in successful organizations: organizational cultures that promote high performance and accountability. At this time last year, NASA planned to develop and begin equipping the shuttle fleet with a variety of safety and supportability upgrades, at an estimated cost of $2.2 billion. These upgrades would affect every aspect of the shuttle system, including the orbiter, external tank, main engine, and solid rocket booster. Last year, we reported that NASA faced a number of programmatic and technical challenges in making these upgrades. First, several upgrade projects had not been fully approved, creating uncertainty within the program. Second, while NASA had begun to establish a dedicated shuttle safety upgrade workforce, it had not fully determined its needs in this area. Third, the shuttle program was subject to considerable scheduling pressure, which introduced the risk of unexpected cost increases, funding problems, and/or project delays. Specifically, the planned safety upgrade program could require developing and integrating at least nine major improvements in 5 years—possibly making it the most aggressive modification effort ever undertaken by the shuttle program. At the same time, technical requirements for the program were not yet fully defined, and upgrades were planned to coincide with the peak assembly period of the International Space Station. Since then, NASA has made some progress but has only partially addressed the challenges we identified last year. Specifically, NASA has started to define and develop some specific shuttle upgrades. For example, requirements for the cockpit avionics upgrade have been defined. 
Also, Phase I of the main engine advanced health monitoring system is in development, and Friction Stir Welding on the external tank is being implemented. In addition, according to Shuttle Development Office officials, staffing for the upgrade program is adequate. Since our last report, these officials told us that the Johnson Space Center has added about 70 people to the upgrade program, while the Marshall Space Flight Center has added another 50 to 60 people. We did not assess the quality or sufficiency of the added staff, but according to the development office officials, the workforce’s skill level has improved to the point where the program has a “good” skill base. Nevertheless, NASA has not yet fully defined its planned upgrades. The studies on particular projects, such as developing a crew escape system, are not expected to be done for some time. Moreover, our previous concerns with the technical maturity and potential cost growth of particular projects have proven to be warranted. For example, the implementation of the electric auxiliary power unit has been delayed indefinitely because of technical uncertainties and cost growth. Also, the estimated cost of Phase II of the main engine advanced health monitoring system has almost doubled, and NASA has canceled the proposed development of a Block III main engine improvement because of technological, cost, and schedule uncertainties. Compounding the challenges that NASA is facing in making its upgrades is the uncertainty surrounding its shuttle program. NASA is attempting to develop alternatives to the space shuttle, but it is not yet clear what these alternatives will be. We recently testified before the Subcommittee on Space and Aeronautics, House Committee on Science, on the agency’s Space Launch Initiative. This is a risk reduction effort aimed at enabling NASA and industry to make a decision in the 2006 time frame on whether the full-scale development of a reusable launch vehicle can be undertaken.
However, as illustrated by the difficulties NASA experienced with another reusable launch vehicle demonstrator—the Lockheed Martin X-33—an exact time frame for the space shuttle’s replacement cannot be determined at this time. Consequently, shuttle workforce and upgrade issues will need to be considered without fully knowing how the program will evolve over the long run. In conclusion, NASA has made a start at addressing serious workforce problems that could undermine space shuttle safety. It has also begun undertaking the important task of making needed safety and supportability upgrades. Nevertheless, the challenges ahead are significant—particularly because NASA is operating in an environment of uncertainty and it is still contending with the effects of its downsizing effort. As such, it will be exceedingly important that NASA sustain its attention and commitment to making space shuttle operations as safe as possible. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that you or Members of the Subcommittee may have. For further contact regarding this testimony, please contact Allen Li at (202) 512-4841. Individuals making key contributions to this testimony included Jerry Herley, John Gilchrist, James Beard, Fred Felder, Vijay Barnabas, and Cristina Chaplain. | In August 2000, the National Aeronautics and Space Administration's (NASA) space shuttle program was at a critical juncture. Its workforce had declined significantly since 1995, its flight rate was to double to support the assembly of the International Space Station, and costly safety upgrades were planned to enhance the space shuttle's operation until at least 2012. Workforce reductions were jeopardizing NASA's ability to safely support the shuttle's planned flight rate. 
Recognizing the need to revitalize the shuttle's workforce, NASA ended its downsizing plans for the shuttle program and began to develop and equip the shuttle fleet with various safety and supportability upgrades. NASA is making progress in revitalizing the shuttle program's workforce. NASA's current budget request projects an increase of more than 200 full-time equivalent staff through fiscal year 2002. NASA has also focused more attention on human capital management in its annual performance plan. However, considerable challenges still lie ahead. Because many of the additional staff are new hires, they will need considerable training and will need to be integrated into the shuttle program. Also, NASA still needs to fully staff areas critical to shuttle safety; deal with critical losses due to retirements in the coming years; and, most of all, sustain management attention to human capital reforms. Although NASA is making strides in revitalizing its workforce, its ability to implement safety upgrades in a timely manner is uncertain. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Forest Service, Bureau of Land Management, Fish and Wildlife Service, and National Park Service manage more than 670 million acres of federal lands across the country (see fig. 1). Each agency has a unique mission, focused on priorities that shape how it manages these lands. Specifically: The Forest Service manages land for multiple uses, including timber, recreation, and watershed management and to sustain the health, diversity, and productivity of the nation’s forests and grasslands to meet the needs of present and future generations. The Forest Service manages lands under its jurisdiction through nine regional offices, 155 national forests, and 20 grasslands. The Bureau of Land Management also manages land for multiple uses, including recreation; range; timber; minerals; watershed; wildlife and fish; natural scenic, scientific, and historical values; and the sustained yield of renewable resources. The agency manages public lands under its jurisdiction through 12 state offices; each state office has several subsidiary district and field offices. The Fish and Wildlife Service manages the National Wildlife Refuge System, a network of lands and waters that provides for the conservation of fish, wildlife, and plants and their habitats, as well as opportunities for wildlife-dependent recreation, including hunting, fishing, and wildlife observation. The refuge system includes about 585 refuges. Individual refuges known as stand-alone refuges report directly to one of eight regional offices, or refuges may be grouped with others into a complex under a common manager, who in turn reports to a regional office. The National Park Service manages the 393 units of the National Park System to conserve the scenery, natural and historic objects, and wildlife of the system so that they will remain unimpaired for the enjoyment of this and future generations.
Individual park units have varied designations corresponding to the natural or cultural features they are to conserve, including national parks, monuments, lakeshores, seashores, recreation areas, preserves, and historic sites. The agency has established seven regional offices. To respond to and investigate illegal activities occurring on the lands they manage, the agencies employ uniformed law enforcement officers who patrol federal lands, respond to illegal activities, and conduct routine investigations. In addition, the agencies have investigative special agents who investigate serious crimes in more detail. In this report we use the term “law enforcement officer” to include both uniformed law enforcement officers and investigative special agents, unless noted otherwise. In each of the four agencies, different officials make decisions about law enforcement resource needs. The Forest Service’s law enforcement and investigations program is “straightlined,” meaning that law enforcement officers in the field report to law enforcement officials at a regional office, who in turn report to law enforcement officials at agency headquarters in Washington, D.C. The Forest Service has a budget line item for law enforcement, and within budget constraints, its Director of Law Enforcement and Investigations has authority to make decisions about the number of uniformed officers and investigative agents to employ and where to assign them. In contrast, for the three Interior agencies, law enforcement officials and unit or regional land managers share decision- making authority for the law enforcement programs: in general, law enforcement officials make decisions about the number and location of agents, while land managers—such as a Bureau of Land Management state director, a refuge manager, or a park superintendent—make decisions about uniformed officers for their specific land units. 
Land managers determine how much of their overall budget they want to allocate to law enforcement activities. This budget must cover each unit’s expenditures for law enforcement, maintenance, visitor services, resource management, and other operations. State and local law enforcement agencies, as well as other federal agencies, may also play a role in responding to illegal activities occurring on lands managed by the four land management agencies. For example, on some federal lands, state and local law enforcement officers have sole responsibility for responding to certain crimes, such as violent crimes, and on other federal lands, the responsibility for responding to most crimes is shared among federal, state, and local law enforcement officers. In some locations, state and local law enforcement agencies have entered into agreements allowing federal land management agencies’ law enforcement officers to act as state and local law enforcement officers on federal lands. Specifically, such agreements may allow the land management agencies’ law enforcement officers to enforce state laws, such as traffic laws. Other agreements may allow local law enforcement officers to enforce certain federal laws and regulations, such as fishing and hunting restrictions, on federal lands. And other federal agencies also enforce laws and respond to illegal activities on federal lands. For example, Border Patrol—an office within the Department of Homeland Security—is responsible for controlling and guarding the borders of the United States against the illegal entry and smuggling of people, drugs, or other contraband, and the Drug Enforcement Administration in the Department of Justice enforces federal laws regarding controlled substances. As in America’s cities, suburbs, and rural areas, a wide variety of illegal activities occurs on federal lands around the nation, damaging natural and cultural resources and threatening the safety of the public and agency employees. 
How often these illegal activities occur is unknown, however, because agency data do not fully capture their occurrence and magnitude; the extent of resource damage and of threats to public and agency employee safety is likewise unknown. Although agency data are insufficient to quantify the extent of illegal activities or their effects, the data identify a variety of illegal activities occurring on federal lands, ranging from traffic violations to theft of natural and cultural resources to violent crimes. These activities may have overlapping effects on natural, cultural, and historical resources; public access and safety; and the safety of agency employees.

Available information does not allow land management agencies to fully identify either the occurrence of illegal activities on federal lands or the effects of those activities on resources, the public, and agency employees. The agencies maintain data on law enforcement incidents, including information such as the type of crime, characteristics of victims and offenders, and types and value of resources or property damaged or stolen. These data, however, cannot be used to monitor trends in the occurrence of illegal activities on federal lands. Agency law enforcement officials told us that an inherent limitation in using these data to assess trends is that a change from one time period to another more likely reflects a change in law enforcement staffing levels or an agency’s emphasis on responding to particular types of crime than an actual change in the occurrence of crimes committed. For example, Bureau of Land Management officials told us that the lands they manage in southwestern Colorado are infrequently patrolled by law enforcement personnel and that if the agency increased the number of officers patrolling the area, the number of reported incidents would be likely to increase as well.
According to these officials, the increase would most likely be due not to an actual rise in crime but simply to a rise in reported incidents because of the increased law enforcement presence. Moreover, for some illegal activities, such as violent crimes, state and local law enforcement agencies may have primary responsibility for responding even if the illegal activity occurs on federal lands, and the land management agencies may have no record that a crime occurred. Compounding these inherent shortcomings in incident data, two agencies—the National Park Service and the Fish and Wildlife Service—do not consistently collect or systematically maintain such data. Specifically, law enforcement officials said, of the National Park Service’s 393 units, about 100 units have adopted standardized incident-reporting systems, while the rest rely on ad hoc systems that the units have developed themselves. Similarly, although the Fish and Wildlife Service has developed an incident management system, according to the official responsible for managing law enforcement data, the agency does not require refuges to use it, and many refuges continue to use either a legacy data system or paper records to maintain incident data. As a result, National Park Service and Fish and Wildlife Service officials said, it is difficult for them to track regional or national trends in illegal activities. To help remedy these shortcomings in incident data, Interior, in conjunction with its component agencies, is developing a new law enforcement data system, in part to respond to a 2002 report from its Office of Inspector General, which recommended that Interior develop a departmentwide law enforcement data system. 
The system, known as the Incident Management Analysis and Reporting System, is being designed to improve the agencies’ ability to analyze incident data to identify trends in occurrence of illegal activities—for example, by ensuring that senior agency officials have access to similar information for all units across the country and by allowing officials to analyze incidents across agency boundaries. In addition, the system will be compatible with geographic information systems, giving law enforcement officials the ability to analyze geographic trends in illegal activities. When complete, the system has the potential to provide better information on the types of illegal activities occurring at different Interior units across the country. According to Interior’s program manager, the agencies began field-testing the new system in November 2010 and expect to deploy it fully by the end of 2012. Like the extent of illegal activities occurring on federal lands, the effects of such illegal activities on resources, the public, and agency employees are also not fully known. Agency law enforcement officials reported that their agencies do not systematically collect information on the effects of illegal activities, except in certain cases—for example, when needed as evidence in criminal investigations. At units we visited, for example, officials said they had documented damage to specific locations resulting from illegal activities, such as dumping of trash and hazardous materials, marijuana cultivation, timber theft, and unauthorized off-highway vehicle (OHV) use. Senior agency law enforcement officials said that while available information—such as quantities of trash dumped or acres of vegetation damaged to cultivate marijuana—helps them understand the effects of illegal activities on resources at specific locations, they did not believe it is feasible to quantify the effects of all illegal activities across the country. 
Although the four land management agencies did not have comprehensive information to determine the level of and trends in illegal activities occurring on the federal lands they manage, law enforcement officials and land managers we interviewed at 26 geographically dispersed agency units identified a variety of illegal activities that have occurred on their units. These officials also identified a variety of impacts that these activities can have on natural and cultural resources and public and employee safety. These illegal activities, described below, can be grouped into eight categories—roughly in order of severity—from least severe, such as traffic violations, to most severe, such as violent crimes. Agency law enforcement officials at several units we visited identified speeding, reckless driving, driving under the influence, and other traffic violations as a set of illegal activities that they encounter frequently on public lands. According to these officials, traffic violations on federal lands pose safety risks to park visitors and wildlife. For example, the Chief Ranger at Great Smoky Mountains National Park (located along the North Carolina-Tennessee border) estimated that park law enforcement officers spend about 70 to 80 percent of their time enforcing traffic laws. He said that about 300 accidents happen each year in the park and that park law enforcement officers arrest about 40 to 50 people annually for driving under the influence. Officials at several units also told us that the need to patrol roads may sometimes hinder their ability to protect important resources on their units. For example, the chief rangers for Great Smoky Mountains and Cumberland Gap national parks said that enforcing traffic laws left little time for law enforcement officers to patrol those parks’ backcountry areas—areas that are home to important plant and animal species. 
Agency law enforcement officials told us that the presence of individuals on federal lands who are publicly intoxicated or who possess or are under the influence of illegal drugs is another kind of illegal activity that they encounter frequently on their units. This activity threatens the safety of other visitors, as well as law enforcement officers. The officials told us that when an area on federal land develops a reputation as a place where people drink or use illegal drugs, the general public sometimes avoids these areas. For example, officials at the Cherokee National Forest in Tennessee said that several of the national forest’s campgrounds had developed such reputations. They said that in an effort to reduce problems related to alcohol and drug use and to increase public confidence in the safety of being in the forest, they added law enforcement patrols and prohibited alcohol use in certain campgrounds—efforts they believed had been successful. The unauthorized use of recreational vehicles, such as bicycles, boats, OHVs, and snowmobiles, is another type of illegal activity that occurs at many of the federal land units we visited. Law enforcement officials noted that when agency regulations and policies governing the use of such vehicles are violated, damage to natural or cultural resources and conflicts with other members of the public may arise. Agency officials at many units we visited reported that unauthorized use of OHVs was harming resources by causing soil erosion; damaging vegetation, including in streamside areas; fragmenting wildlife habitat; and damaging archaeological or historical sites. For example, soil and vegetation damage from unauthorized OHV use at Sonoran Desert National Monument in Arizona was severe enough that in 2007 the Bureau of Land Management closed about 55,000 acres of the monument to all motorized vehicles (see fig. 2). 
Unauthorized use of boats and snowmobiles can also damage resources and create public conflicts, according to officials at other units we visited. For example, Merritt Island National Wildlife Refuge in Florida has established “manatee zones”—prohibiting motorized boat traffic in some manatee zones and imposing speed limits in others—in an effort to reduce collisions between boats and manatees. Although manatee zones have helped reduce collisions, according to refuge officials, some boaters enter closed areas or exceed speed limits, and collisions still occur. Similarly, at Everglades National Park in Florida, officials reported damage to seagrass in Florida Bay from unauthorized boat traffic. The officials said that motorized boats are allowed in Florida Bay but are prohibited from touching the seafloor bottom, which is designated as wilderness. Much of Florida Bay, however, is less than 2 feet deep, and boats can run aground, or propellers can scrape seagrass growing on the bay floor, causing damage known as “prop scars” (see fig. 3). In addition, officials at the Superior National Forest in Minnesota said that unauthorized use of motorized boats and snowmobiles in closed areas diminishes the wilderness experience for visitors to the Boundary Waters Canoe Area, the nation’s most visited wilderness area. Officials at many units we visited also reported that people use federal lands for a broad range of other unauthorized purposes. For example, landowners whose property borders federal lands have constructed access roads; outbuildings; and, in some cases, houses on federal lands. In addition, hunters have built unauthorized platforms or shelters in trees to hunt from, and these structures are often accompanied by a network of OHV trails, cutting of vegetation to improve sightlines, and garbage. Other officials noted that their lands are often used for illegal dumping of household and commercial waste—including toxic or otherwise dangerous waste. 
According to these officials, such illegal activities can harm ecosystems, damage vegetation, reduce wildlife habitat, introduce dangerous materials into the environment, diminish public safety, and have other negative effects on natural resources and the public. For example, Sonoran Desert National Monument officials reported that dumping cases have included several dump-truck-loads of tires, more than 500 gallons of motor oil, and cyanide and explosives from mining operations. Several units we visited also reported problems with people staying in an area longer than permissible, known as illegal occupancy. In some cases, the people were in essence living on federal lands. Illegal occupancy can damage vegetation, generate garbage and human waste, affect wildlife behavior, and curtail public access to federal lands, according to agency officials. Some officials also said that some of the violators pose threats to the public. In Florida, for example, Ocala National Forest officials estimated that several hundred people lived illegally in the forest in 2006 and that these people committed other crimes, including illegal drug use, assault, and rape. Subsequently, forest officials initiated a “Reshaping the Ocala” campaign intended to deter such crimes. Officials said they increased law enforcement staff, strengthened length-of-stay orders to make them easier to enforce, and raised fines—efforts they say have reduced the effect of these types of illegal activities. Several units also reported problems with unauthorized commercial activities—such as guided hunting, rafting, and sightseeing trips—on federal lands. Officials said that commercial activities conducted without permits can take customers away from authorized businesses; detract from the experience of customers using authorized guides; and may pose safety risks to the public, since guides operating illegally may not take safety precautions or have the insurance an agency may require of operators. 
Moreover, since the number of permits an agency issues may be based on an assessment of cumulative effects on natural resources (e.g., permitting a certain number of commercial hunting guides to operate in an area on the basis of predicted effects on wildlife), unauthorized guides can increase pressure on those resources. Officials at many of the sites we visited reported that natural and cultural resources and government and personal property on federal lands have been stolen or damaged by illegal activities. Such theft or damage not only harms the resources—including rare species and species of commercial value—but also adds costs to the agencies and the public and diminishes the public’s enjoyment of federal lands, according to these officials. In addition, theft or vandalism of archaeological and paleontological resources can lead to the loss or destruction of irreplaceable artifacts and deprive scientists of important sources of knowledge. Some examples of these kinds of illegal activities include the following: Timber theft occurs on federal lands when a business cuts more trees than allowed under its contract with an agency or when neighboring landowners illegally remove trees from federal lands. In addition, individual trees with high commercial value may also be stolen from federal lands. For example, a law enforcement officer responsible for several national forests in Washington said that large cedar and bigleaf maple trees, often hundreds of years old, are stolen from the national forests. She estimated that a single bigleaf maple tree could be sold for about $20,000 because the wood is highly valued for making musical instruments. Theft of other forest products, including medicinal plants such as ginseng, mushrooms, ornamental landscaping plants, and greenery for floral arrangements, also occurs on federal lands. 
For example, officials at Cumberland Gap and Great Smoky Mountains national parks and the Cherokee National Forest said that while they do not know exactly how often ginseng theft occurs because these thefts are difficult to identify, they believe it occurs frequently. One official said he was concerned that such thefts could substantially reduce ginseng populations on federal lands, which could in turn lead to listing of the plant as threatened or endangered under the Endangered Species Act. Illegal hunting of bear, elk, waterfowl, and other wildlife and illegal fishing are common on federal lands. Hunting and fishing restrictions are typically designed to achieve desired population levels of the animals, and illegal hunting and fishing can reduce the population below desired levels. It can also decrease the likelihood of success for people who are hunting or fishing legally and, in some cases, can result in the closure of an area. Everglades National Park officials, for example, told us that they closed part of the park to all public access because of illegal hunting of American crocodiles, designated as threatened in Florida under the Endangered Species Act, and officials at the Cherokee National Forest said that the state of Tennessee has closed several areas in the forest to hunting to make it harder to illegally hunt black bears. Archaeological artifacts have been stolen and sites vandalized on federal lands. Officials acknowledged that they do not know the extent of the problem, in part because many archaeological sites are undocumented and others are in remote areas where monitoring is difficult. In some cases, the damage from any one incident may be small, but officials said that the cumulative effect can diminish the site for future visitors and sometimes compromise scientific understanding. In addition, officials identified theft of significant artifacts, including the systematic looting of archaeological sites, as an important concern.
For example, Bureau of Land Management officials reported that a 2009 investigation into the theft and trafficking of more than 250 Indian artifacts, valued at more than $330,000, from tribal and federal lands in the Southwest—the largest such case in the United States—led to the indictment of 28 people and nine felony convictions as of October 2010 and that additional indictments are expected. Some of the artifacts stolen, and later recovered by law enforcement officers during this investigation, included burial and ceremonial masks, pottery, and a buffalo headdress. Archaeological sites can also be vandalized: for example, several Indian pictographs have been vandalized at Arches and Canyonlands national parks in Utah. Historical artifacts have also been stolen or damaged on federal lands. Theft of Civil War artifacts is a major concern at Fredericksburg and Spotsylvania National Military Park in Virginia, according to agency officials. About 200 artifacts were stolen in 2007, for example, causing an estimated $57,000 in damaged or lost resources (see fig. 4). Moreover, officials said that historical resources such as Civil War earthworks or trenches have been damaged by unauthorized activities, including climbing or walking on them, riding on or over them with bicycles and OHVs, and unauthorized development on adjacent properties. Officials at the Fish and Wildlife Service’s Detroit Lakes Wetland Management District in Minnesota reported that some property owners violate the conditions of minimally restrictive easements purchased by the federal government to protect wetlands and grasslands—actions that hinder the agency’s efforts to protect breeding habitat for more than 60 percent of key migratory bird species in the United States. These easements are managed by the Fish and Wildlife Service to provide habitat for migratory birds, particularly waterfowl, in the Prairie Pothole Region of the north-central United States. 
Fish and Wildlife Service officials reported that some property owners have drained protected wetlands to expand their land under cultivation or have grazed livestock on protected grasslands during migratory birds’ nesting periods. Government and private property can be stolen or damaged on federal lands. Theft or damage of government property, such as equipment, road signs, gates, and structures, can result in costs to the agencies and detract from the public’s experience, for example, when restrooms or information kiosks are vandalized. Similarly, theft or damage of private property—for example, when valuables are stolen from parked vehicles—can impose costs on the visiting public. According to officials at several federal land units we visited and National Drug Intelligence Center reports, marijuana is increasingly grown on federal lands. Law enforcement officials told us that although most such marijuana cultivation has historically occurred on the West Coast, intensive cultivation—in many cases by large-scale international drug- trafficking organizations—has spread to other regions of the country in recent years. The National Drug Intelligence Center reported that more than 4 million plants were eradicated from federal lands in 2008—about half of all outdoor-grown marijuana eradicated in the United States. Marijuana cultivation on federal lands not only increases the availability of illegal drugs but also harms ecosystems, according to the federal land managers we spoke with. 
Specifically, these officials identified the following resource impacts of marijuana cultivation on federal lands:

- removal of natural vegetation and the application of pesticides, herbicides, fertilizers, and other chemicals, including chemicals that may be banned in the United States;
- diversion of water from streams, which has reduced downstream waterflows and has harmed fish and amphibians;
- killing of wildlife, including bear and deer, to keep the animals from eating or trampling marijuana plants or to supplement growers’ food stocks;
- deposits of large amounts of trash and human waste; and
- setting of wildland fires, either intentionally or accidentally, which have also degraded the natural resources on federal lands.

Cleaning up cultivation sites is important, not only to restore damaged areas, but also to make it less likely that growers will return, agency officials told us. In 2008, the National Park Service restored 14 marijuana cultivation sites in its Pacific West Region. To clean up these sites, the National Park Service removed more than 10 miles of irrigation hose, about 10,000 pounds of trash, and more than 3,700 pounds of fertilizer, as well as pounds of hazardous chemicals such as pesticides (see fig. 5). Cleaning up marijuana cultivation sites costs an estimated $10,000 to $15,000 an acre and reduces the agencies’ ability to accomplish other planned work, according to agency officials. Moreover, marijuana growers are typically armed, posing a threat to public safety and agency employees, according to agency law enforcement officials. Hunters, hikers, and other members of the public, as well as agency employees, have been shot, shot at, kidnapped, and threatened with violence. Although such violent encounters are rare, law enforcement officials at several units we visited said that marijuana growers have become more violent in recent years.
Law enforcement officials also said that the public is increasingly aware of the danger and that some people avoid areas where marijuana cultivation is likely. The threat posed by marijuana growers has also affected the agencies’ ability to work in some remote areas. A regional Forest Service law enforcement official in California told us that the agency had to remove three crews of wildland firefighters during an 8-week period in 2009 because of encounters with marijuana growers.

Law enforcement officials told us that some remote federal lands along the U.S. border are often used to smuggle drugs or humans into the country. According to these officials, such illegal activities can damage sensitive wildlife habitat and threaten public safety. Officials at every unit we visited in Arizona reported substantial natural resource damage from illegal border activity (see fig. 6). In 2006, for example, the Refuge Manager of Buenos Aires National Wildlife Refuge testified before the House of Representatives’ Subcommittee on Interior, Environment, and Related Agencies of the Committee on Appropriations that an estimated 235,000 people entered the United States illegally across refuge lands in 2005. He reported that illegal border crossers had disturbed wildlife and created more than 1,300 miles of illegal trails, causing the loss of vegetation and severe erosion. He also estimated that each year illegal border crossers leave more than 500 tons of trash and more than 100 abandoned vehicles on the refuge. Further, officials at several units we visited reported that illegal border crossers have started wildland fires, either by accident (e.g., from a cooking fire that escaped) or on purpose (e.g., to divert law enforcement resources away from certain areas).
Officials at Buenos Aires National Wildlife Refuge told us that illegal border activity was damaging sensitive desert ecosystems—including habitat for several threatened or endangered species, such as the masked bobwhite quail and Sonoran pronghorn—although the officials were unable to quantify the effects of illegal activity on these populations. Illegal border activities also affect the safety of the public and agency employees. For example, officials at the three units we visited in Arizona—Buenos Aires National Wildlife Refuge, Coronado National Forest, and Sonoran Desert National Monument—observed that smugglers are often armed and pose a risk to public and employee safety. The officials said that, while few violent encounters between smugglers and the public have occurred to date, many illegal immigrants or smugglers have been murdered or raped on federal lands. Officials also reported that illegal border crossers have stolen vehicles (both private and government owned), broken into agency employee housing, and stolen food and water. Officials also said that visitors to federal lands in these areas are concerned about their safety and that some visitors have said they no longer go to certain areas because of the illegal activities. In some cases, the agencies have determined that the risk to public safety is high enough to warrant closing areas to public use. Buenos Aires National Wildlife Refuge, for example, has closed a portion of the refuge adjacent to the border to reduce the risk to the public. Similarly, the National Park Service closed most of Organ Pipe Cactus National Monument, a popular location for bird-watching, after a park law enforcement officer was murdered in 2002 by a member of a drug-trafficking organization. According to law enforcement officials at the units we visited, the public and agency employees can also be the victims of violence, including assault, rape, and homicide, on federal lands. 
Although land management officials stressed that this kind of violence remains rare, several units we visited reported some violent incidents. For example, Ocala National Forest officials reported that two college students were murdered in the forest in 2006. Similarly, Bureau of Land Management officials in California reported examples of violence, including rape and severe assaults, in the Imperial Sand Dunes Recreation Area—a popular OHV location that can attract 150,000 or more people on holiday weekends. Agency employees, including law enforcement officers, may also fall victim to violence. For example, a Forest Service law enforcement officer in Washington was murdered during a traffic stop in 2008. Beyond the immediate impact on victims, some officials told us, such violent crimes also have an effect on the public because after such incidents happen, the public is more likely to avoid areas they suspect may be prone to violence.

In recent years, federal land management agencies have responded to illegal activities occurring on federal lands in several ways. They have generally increased the number of law enforcement officers, directed officers to respond to marijuana cultivation and illegal border activities, assigned officers temporarily to areas needing a greater law enforcement presence, and increased the training required for new law enforcement officers. In response to illegal activities occurring on federal lands, three of the four agencies have increased the number of their permanent law enforcement officers in recent years (see table 1). For example, the Bureau of Land Management has increased the number of its permanent law enforcement officers by about 40 percent since fiscal year 2000, and the Forest Service increased the number of its officers by almost 18 percent over the same period.
Similarly, since fiscal year 2006, the Fish and Wildlife Service increased the number of its permanent officers performing law enforcement duties on a full-time basis by about 26 percent. The National Park Service, in contrast, decreased its permanent law enforcement officers by more than 12 percent since fiscal year 2005, although the agency partially compensated for this loss by increasing the number of law enforcement officers it hired on a seasonal, rather than permanent, basis. At the Fish and Wildlife Service, however, the potential benefits of the overall increase in the number of law enforcement officers may have been partially offset: although the agency substantially increased the number of its full-time law enforcement officers, it also reduced the number of part-time officers by more than 34 percent. According to the Chief of the Division of Refuge Law Enforcement, this reduction came in response to a 2002 review by Interior’s Office of Inspector General, which reported that law enforcement on federal lands was becoming more dangerous and raised concerns about the safety of using part-time law enforcement officers. In response to the Inspector General’s concern, the refuge law enforcement division chief told us, the agency made a concerted effort to reduce the number of part-time officers and also required all of its part-time law enforcement officers to spend at least 25 percent of their time performing law enforcement duties. Still, the refuge law enforcement division chief recognized that the reduction in part-time officers meant the loss of a number of officers who, in past years, would have been available to respond to illegal activities. Although the National Park Service, in contrast to the other agencies, decreased the number of its permanent law enforcement officers, this decline has been accompanied by about a 25 percent increase since 2006 in the number of officers employed on a seasonal basis.
The National Park Service uses seasonal officers—those employed for less than 6 months per year—to respond to seasonal changes in national park visitation. National Park Service officials reported that seasonal officers do not receive the same training as permanent officers. Moreover, echoing concerns it raised about the use of part-time officers, Interior’s Inspector General also raised concerns about the use of seasonal officers, recommending that the Interior agencies also reduce their dependence on such officers. A senior National Park Service official told us that the agency recognizes the Inspector General’s concerns about using seasonal officers, but that units with large seasonal variations in visitation may not have sufficient work to warrant hiring additional permanent officers. Despite the general increase in the agencies’ law enforcement staffing, agency officials at several units we visited said that law enforcement resources in some areas have remained thin. For example, in southeastern Utah, one Bureau of Land Management officer is responsible for patrolling about 1.8 million acres of land rich in archaeological resources—including lands from which archaeological artifacts have been stolen in recent years. According to this officer, when she has been on leave, at training, or temporarily assigned to assist other units, the area has been left without law enforcement coverage. Likewise, Fish and Wildlife Service officials told us that the Merritt Island National Wildlife Refuge Complex—which includes six refuges spread across five counties—has had 2 full-time officers and 2 part-time officers. As a result, the officials said, some of the refuges have little to no regular law enforcement coverage. Similarly, the Chesapeake Marshlands National Wildlife Refuge Complex—which includes four refuges in Maryland and Virginia—has had 1 full-time officer and 1 part-time officer. 
Additionally, a Forest Service official said that there were 12 law enforcement officers to patrol three national forests in southwestern Colorado, totaling about 7.5 million acres, and that certain areas of those forests are rarely patrolled by law enforcement officers.

Agency documents indicate that the agencies have directed additional law enforcement resources to certain areas of the country in a specific effort to deter cultivation of marijuana on federal lands and illegal activities occurring on federal lands along the United States-Mexico border. Agency law enforcement officials told us that the agencies have placed high priority on distributing law enforcement resources to areas where these illegal activities are most prevalent—in part responding to direction from congressional committees and to the high risk posed by these activities to visitors, employees, and resources. To deter marijuana cultivation on federal lands, for example, the agencies have taken numerous steps, including the following:

- Interior began its marijuana eradication initiative in fiscal year 2009, intended to provide a coordinated, interagency strategy involving Interior and its bureaus, the Forest Service, and other federal law enforcement agencies to improve eradication of marijuana and drug interdiction and to measurably increase the protection of public lands, employees, and visitors.

- The Bureau of Land Management reported using $5.1 million in fiscal year 2009 to hire 10 more law enforcement officers in six western states; fund marijuana detection, investigation, and eradication operations on its lands; purchase and upgrade communications and law enforcement equipment; fund cooperative agreements with state and local law enforcement agencies; and rehabilitate and restore former cultivation sites.

- The Forest Service reported that it hired 29 law enforcement officers in California, using a portion of $12 million appropriated in fiscal year 2007 for a nationwide initiative to increase protection of national forest lands from drug-trafficking organizations.

- The National Park Service reported that it directed about $2.7 million to several national parks in California and Washington to help the parks respond to marijuana cultivation in fiscal year 2009; similarly, the agency reported directing $448,000 to Sequoia and Kings Canyon national parks and $316,000 to Whiskeytown National Recreation Area in California in fiscal year 2006.

- The National Park Service also reported that in fiscal year 2009 it created a marijuana investigation and response team, which the agency deploys to carry out marijuana prevention, detection, eradication, and restoration operations in park units affected by marijuana cultivation. For example, according to the National Park Service's Chief Ranger of the Pacific West Region, officers from the team; the Forest Service; and 14 other federal, state, and local law enforcement agencies jointly conducted Operation Save Our Sierra in 2009. This operation eradicated more than 400,000 marijuana plants from 71 cultivation sites across Fresno County, California.

The agencies have also directed resources to deter illegal activity along the United States-Mexico border. For example:

- In fiscal year 2009, Interior established its Safe Borderlands initiative, intended to "provide a safe environment for people and protect resources through the focused deployment of personnel, restoration of ecosystems, and integrated partnerships along the southwest border."

- The Fish and Wildlife Service reported that it added six new law enforcement officers to four refuges along the border in 2009.

- The Bureau of Land Management reported that in 2008 it hired nine law enforcement officers in Arizona, California, and New Mexico. In fiscal year 2009, the agency also directed $350,000 to purchase new radios for law enforcement officers working along the border.

- In 2007, the Forest Service added eight law enforcement officers at the Coronado National Forest to deter illegal cross-border activity, according to agency officials.

- The National Park Service reported that it constructed a vehicle barrier along the border at Organ Pipe Cactus National Monument in response to direction in committee reports accompanying the agency's fiscal year 2003, 2004, and 2005 appropriations. The agency also reported that in recent years it added more than 30 law enforcement officers to five parks along the border in Arizona and Texas.

The agencies have also temporarily assigned, or detailed, law enforcement officers to areas where more officers have been needed to anticipate increases in visitation, carry out planned operations such as patrolling the border or eradicating marijuana, or assist other law enforcement agencies outside federal lands. For example, Bureau of Land Management officials told us, 40 officers are detailed to the Imperial Sand Dunes Recreation Area on four major holiday weekends each year to protect resources and ensure visitor safety during large gatherings of OHV enthusiasts. Similarly, officials at the Okanogan-Wenatchee National Forest in Washington and Merritt Island National Wildlife Refuge told us that detailees have been used during hunting seasons and large fishing tournaments to discourage hunting and fishing violations. National Park Service officials reported that the agency temporarily deployed 7 to 11 officers on multiple occasions to Organ Pipe Cactus National Monument to assist with the interdiction of drug and human smuggling.
In addition, Bureau of Land Management officials told us that the agency has identified officers with expertise in marijuana investigations and organized them into regional pools to provide additional investigative support on a case-by-case basis in areas where significant marijuana cultivation sites have been discovered. Headquarters officials for all four agencies said that temporarily detailing staff allows them to augment their law enforcement presence when and where needed, but they also said they recognized that doing so reduces the law enforcement presence at other locations.

To better prepare their law enforcement officers to respond safely to illegal activities occurring on federal lands, the agencies have increased the training new officers are required to complete. Specifically, each of the agencies now requires new law enforcement officers to complete a similar three-part training curriculum. First, new officers are required to pass the land management police training program, a 16-week course developed in 2005 by the Federal Law Enforcement Training Center in conjunction with federal land management agencies. A description of the training indicates that the course covers law enforcement skills and knowledge that officers for all federal land management agencies need to perform their duties effectively. Second, the agencies require new officers to receive training about the laws, regulations, and policies specific to each agency. The Interior agencies have established 1- to 3-week classroom courses covering agency-specific information, and the Forest Service has integrated this information into its field officer training program. Third, the agencies have established field officer training programs, varying in length from 9 to 12 weeks, which allow new officers to apply the knowledge and skills learned in the classroom to law enforcement duties in the field under the supervision of experienced officers.
The land management police training and field officer training programs were established over the past decade, in part in response to shortcomings identified by Interior's Inspector General. Law enforcement officials at most federal land units we visited said that the training required for new officers generally prepared them well for performing their duties effectively and safely. Some officials at units we visited also said that responding to marijuana cultivation and illegal border activities poses certain risks and that additional specialized training would help officers better respond to those activities. The Forest Service requires its law enforcement officers to complete a 2-week course on drug enforcement before they are allowed to do substantial work investigating drug-trafficking operations. This course trains officers to identify marijuana cultivation sites, understand the hazards of investigating these sites, and practice special surveillance and tactics. Law enforcement officers at one national forest we visited said that although this training was useful, more emphasis on special tactics would improve the effectiveness and safety of marijuana eradication operations. In contrast, the Interior agencies do not require officers to complete specialized drug enforcement training. Bureau of Land Management law enforcement officers in California said that more tactical training would help them better respond to the challenges posed by drug-trafficking organizations. Similarly, a Bureau of Land Management law enforcement officer in Arizona said that additional tactical training would help officers better respond to illegal border activities. A senior law enforcement official for the Bureau of Land Management told us that the agency recognizes the need for additional tactical training for law enforcement personnel who respond to these types of illegal activities and plans to incorporate 8 hours of such training into its 2011 training curriculum.
A National Park Service official also told us the agency plans to hold a 2-week course in 2011 on special operations and tactics for law enforcement officers who work along the border. Although land management agencies consider varied information on the occurrence and effects of illegal activities on federal lands, the agencies do not systematically assess the risks posed by such activities when determining their needs for resources and where to distribute them. Because of limitations in the information they consider, officials cannot fully assess either the magnitude of the risks posed by illegal activities or the likelihood of their occurrence. As a result, when making decisions about needed law enforcement resources and how to distribute those resources, the agencies cannot systematically assess the relative risks faced by the hundreds of individual land management units across the country. To better achieve their missions and improve accountability, federal agencies are required to employ certain internal controls, including assessing the risks agencies face from both external and internal sources. Applying the federal risk assessment standard to illegal activities occurring on federal lands therefore suggests that—to respond effectively to these activities and reduce their effect on natural and cultural resources, the public, and agency employees—land management agencies should, at a minimum, (1) comprehensively identify the risks posed by illegal activities on their lands, (2) assess identified risks to determine their magnitude and the likelihood of their occurrence, and (3) use information from these assessments in determining the law enforcement resources they need and how to distribute those resources. The risk assessment standard recognizes that the specific risk analysis methodology used can vary by agency because of differences in agencies’ missions and the difficulty in qualitatively and quantitatively assigning risk levels. 
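For illustration only, the three elements of the federal risk assessment standard described above can be sketched as a simple scoring exercise. The unit names, threat types, and 1-to-5 likelihood and magnitude scores below are entirely hypothetical, and no agency discussed in this report uses this particular tool; the sketch merely shows one minimal way a periodic assessment could rank units to inform resource decisions.

```python
# Illustrative only: a minimal sketch of the three risk assessment elements
# described above. All unit names, threat types, and 1-5 scores are
# hypothetical; no agency in this report uses this particular tool.

def risk_score(threats):
    """Element 2: rate each identified threat by likelihood and magnitude
    (1-5 scales) and sum likelihood x magnitude across threats."""
    return sum(t["likelihood"] * t["magnitude"] for t in threats)

def rank_units(units):
    """Element 3: rank units by total assessed risk so that scarce law
    enforcement resources can be directed to the highest-risk units first."""
    scored = {name: risk_score(threats) for name, threats in units.items()}
    return sorted(scored, key=scored.get, reverse=True)

# Element 1: a comprehensive inventory of threats identified per unit
# (hypothetical entries).
units = {
    "Unit A": [
        {"threat": "marijuana cultivation", "likelihood": 4, "magnitude": 5},
        {"threat": "artifact theft", "likelihood": 2, "magnitude": 4},
    ],
    "Unit B": [
        {"threat": "unauthorized OHV use", "likelihood": 5, "magnitude": 2},
    ],
}

print(rank_units(units))  # units ordered from highest to lowest total risk
```

In practice, an agency would substitute its own threat inventory, rating scales, and weighting, consistent with the standard's recognition that methodologies can vary by agency.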
Nevertheless, without a systematic process that incorporates all of these elements, the agencies may have limited assurance that they are using their law enforcement resources in a manner that effectively addresses the risk of illegal activities, and they are limited in their ability to meet the federal risk assessment standard. In determining their law enforcement resource needs and how to distribute these resources, law enforcement officials told us they consider various types of information on the occurrence and effects of illegal activities on their federal land units. Because of limitations in the information they consider, however, land management agency officials are unable to fully assess either the magnitude of the risks related to illegal activities on federal lands or the likelihood of their occurrence. Moreover, law enforcement officials identified various approaches that their respective agencies use to determine resource needs, but limitations in these approaches also hinder the agencies' ability to systematically assess the relative risks faced by the hundreds of individual land management units across the country or the agencies as a whole. According to law enforcement officials and land managers we spoke with, they consider the available information on the occurrence and effects of illegal activities on federal lands and use various approaches in managing their law enforcement resources, including the following:

Incident data on illegal activities occurring on federal lands. Land management agencies maintain some data on law enforcement incidents, including the type of crime, characteristics of victims and offenders, and types and value of resources or property damaged or stolen. Incident data allow officials at a unit, regional or state office, or headquarters to identify different types of illegal activities occurring on particular federal lands.
But, as discussed earlier, the incident data the agencies rely on are limited for a variety of reasons and cannot be used to accurately indicate or monitor the trends in occurrence of illegal activities on federal lands.

Information on the effects of illegal activities. Agencies collect some information on the effects of illegal activities on natural and cultural resources and on public and employee safety. As mentioned earlier, at several units we visited, officials said they had documented damage to specific locations from dumping of trash and hazardous materials, marijuana cultivation, timber theft, and unauthorized OHV use. But according to agency officials, information on effects is not systematically collected and is instead collected mainly for specific reasons, as when it is needed as evidence in criminal investigations. As a result, the agencies generally lack consistent quantitative or qualitative information on the effects of illegal activities. Senior agency law enforcement officials said that while available information—such as quantities of trash dumped or acres of vegetation damaged to cultivate marijuana—helps them understand the effects of illegal activities on resources at specific locations, they do not believe it is feasible to quantify the effects of all illegal activities across the country.

Law enforcement plans for individual units and for regions or states. Two agencies—the Bureau of Land Management and the Forest Service—require their units and their state or regional offices to develop law enforcement plans. For example, the Bureau of Land Management manual, which contains policy and program direction, directs the agency's state offices to develop law enforcement plans annually and says that plans are to identify and rank (1) the most pressing law enforcement issues facing units in that state, (2) specific agency lands that are most important to protect, and (3) locations needing additional law enforcement officers.
Most of the 10 state office plans we reviewed contained these elements, although some lacked critical components. For example, the plan for the Bureau of Land Management’s Arizona State Office lists the illegal activities identified as important by each field office in the state, but the plan neither identifies the activities most important statewide, nor ranks those activities according to importance. Even in cases where state offices have identified and ranked the most pressing law enforcement issues and lands to protect, the plans provide little information on the frequency or effects of illegal activities; nor do they identify lands where illegal activities are most likely to occur. In addition, a senior Bureau of Land Management law enforcement official reported that at least two state offices—including California, the state office with the largest law enforcement program in the agency—have not updated their plans in more than 5 years. We found a similar variety in the content of law enforcement plans developed by Forest Service regional offices. For example, the plan for the Rocky Mountain Region identified three issues—motorized and nonmotorized vehicle use, including OHVs; unauthorized commercial activities, including guided hunting, rafting, and sightseeing trips, and other recreational activities; and theft of timber and other forest products—as the biggest challenges to its law enforcement program. The plan detailed the nature and scale of the risks posed by these activities, locations at greatest risk, and strategies to mitigate those risks. In contrast, the plan for the Forest Service’s Eastern Region identified 11 illegal activities as the most important regionwide, but provided little information on the magnitude of the activities’ effects, the locations most affected, or law enforcement strategies the region could use to mitigate those effects. 
Moreover, according to the Forest Service's Director of Law Enforcement and Investigations, two regions—the Pacific Northwest and Southern—have not developed regionwide law enforcement plans; rather, the plans for these two regions simply compile the plans for each forest in the region. As a result, the plans identify neither regional priorities nor strategies for how to use law enforcement resources to respond to those priorities.

Risk assessments for specific issues. In some cases, the agencies have undertaken efforts to assess risks arising from certain types of illegal activities, such as illegal border activities or cultivation of marijuana on federal lands. For example, a recent National Park Service assessment found that marijuana cultivation has led to significant degradation of natural resources, including removal of trees and vegetation, introduction of nonnative and invasive species, pollution from the extensive use of pesticides and fertilizers, alteration of streambeds, and poaching of wildlife. Similarly, in 2003, Interior, in conjunction with some of its component agencies, assessed the risks facing units along the U.S. border with Mexico. This assessment identified different risks, ranging from dumping of trash to violence against the public or law enforcement officers to international terrorism—illegal activities that all posed risks to natural resources, the public, and agency employees along the border. In addition, in 2007 and 2008, the National Park Service's Intermountain Region completed similar assessments for five national parks along the border in Arizona and Texas. The agency reported that on the basis of these assessments, it added more than 30 law enforcement officers to the five parks and constructed new infrastructure, such as fences and vehicle barriers along the borders, to deter illegal entry.
But these assessments provide no information on the importance of the risks from the assessed activities relative to the risks posed by other illegal activities. As a result, individual assessments like these cannot help officials determine which illegal activities pose the greatest risks to resources, the public, and agency employees or help them identify which units are in greatest need of more law enforcement resources.

Formal decision-support tools. In an effort to help them more systematically analyze their law enforcement programs, two of the agencies—the Fish and Wildlife Service and the National Park Service—have developed decision-support tools that estimate the number of law enforcement resources needed at individual units. These tools incorporate a number of variables, such as geographic characteristics, sensitive natural and cultural resources, and visitation patterns, when analyzing law enforcement needs for a refuge or park. Nevertheless, we identified a number of shortcomings with these tools that limit their effectiveness in assessing the relative risks of illegal activities. For example, the Fish and Wildlife Service has used a staffing deployment model developed for it in 2005 by the International Association of Chiefs of Police to help determine its overall staffing needs and to assign new law enforcement officers to specific refuges. Despite initial plans to integrate risk assessments of certain illegal activities for each refuge into the model, the assessments were never conducted and were not included in the model's final analysis. The Chief of the Division of Refuge Law Enforcement said the agency would like to update the model to account for the expansion of the refuge system and to reevaluate the weights placed on the variables included in the model—as well as to include the risk assessment components omitted from the initial analysis—but he said the agency had no specific plans to do so.
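As a purely hypothetical sketch (the actual IACP and National Park Service models are more elaborate, and their real variables and weights are not reproduced here), a decision-support tool of this general kind might combine weighted unit characteristics into a staffing estimate along these lines:

```python
# Hypothetical sketch of a weighted decision-support tool; the variables,
# weights, and figures are invented and do not reproduce the IACP or
# National Park Service models discussed in this report.

WEIGHTS = {
    "acres_100k": 0.5,          # land area, in units of 100,000 acres
    "annual_visits_100k": 1.0,  # visitation, in units of 100,000 visits
    "sensitive_sites": 0.3,     # count of sensitive natural/cultural sites
}

def officers_needed(unit):
    """Combine the weighted variables into an officer estimate, with a
    floor of one officer per unit."""
    score = sum(WEIGHTS[k] * unit[k] for k in WEIGHTS)
    return max(1, round(score))

refuge = {"acres_100k": 3, "annual_visits_100k": 2, "sensitive_sites": 5}
print(officers_needed(refuge))
```

A risk assessment component, such as the one omitted from the Fish and Wildlife Service model's final analysis, could enter a tool like this as an additional weighted variable.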
Similarly, the National Park Service has used its staffing model to help officials determine law enforcement resource needs. However, Interior’s Inspector General has criticized the model because it has never been validated, its methodology has not been supported, and there is no certainty that its main assumptions are correct. Law enforcement officials at several national parks we visited told us that they did not believe the model accurately estimated the number of officers a particular unit needed. Senior National Park Service law enforcement officials told us they recognized the model’s shortcomings and were evaluating options for improving it. Without consistent information on the relative risks illegal activities pose to resources, the public, and agency employees at federal land units across the country, or a systematic approach to use this information to make decisions about how law enforcement resources should be distributed, the agencies have limited assurance that they are accurately determining their law enforcement needs and distributing their law enforcement resources effectively. As stated earlier, the land management agencies should, at a minimum, (1) comprehensively identify the risks posed by illegal activities on their lands, (2) assess identified risks to determine their magnitude and the likelihood of their occurrence, and (3) use this information in determining the law enforcement resources they need and how to distribute those resources. Without such information and processes, the agencies are not adhering to federal internal control standards. As a result, land management agencies may not be able to ensure that their current decisions on allocating law enforcement resources are effective, nor can they know whether resources would be more effective if distributed to different units or, if additional resources are needed, where these new resources should go. 
Senior law enforcement officials at each agency told us they believed that a more systematic approach to assessing risks would help the agencies make more-informed decisions about law enforcement resources. They said such an approach would also help them better explain their law enforcement resource allocation decisions, both within their law enforcement programs—so that officials in the field understood why some units gained law enforcement staff while others stayed the same or declined—and to outside parties, including overall agency leadership. In 2009, we recommended that the National Park Service develop such an approach—specifically that it develop a more comprehensive, routine risk management approach for security. In response to our recommendation, the National Park Service has taken and continues to take actions—such as improving protective infrastructure and surveillance equipment— designed to reduce the risks to historical structures and the public at the five units that have been designated as national icons. The agency has taken few steps, however, to identify and reduce risks to the other units of the National Park System. In an environment of constrained budgets, land management agencies will likely continue to face challenges in protecting natural and cultural resources, the public, and agency employees from the effects of illegal activities on federal lands. The limitations of available information on illegal activities on federal lands, and the agencies’ lack of systematic approaches to identifying law enforcement resource needs and distributing those resources, hamper the agencies’ efforts to target their resources effectively. Without a more systematic method to assess the risks posed by illegal activities and a stronger framework for managing them, the agencies cannot be assured that they are allocating scarce resources in a manner that effectively addresses the risk of illegal activities on our nation’s federal lands. 
To help the agencies identify the law enforcement resources they need and how to distribute these resources effectively, we recommend that the Secretaries of Agriculture and the Interior direct the Chief of the Forest Service and the Directors of the Bureau of Land Management, Fish and Wildlife Service, and National Park Service, respectively, to each take the following action:

Adopt a risk management approach to systematically assess and address threats and vulnerabilities presented by illegal activities on federal lands. The approach can vary among the agencies but should be consistent within each agency and should include (1) conducting periodic risk assessments to identify and rank threats and assess agency vulnerabilities and (2) establishing a structured process for using the results of these assessments to set priorities for and distribute law enforcement resources to best protect natural and cultural resources, as well as public and agency employee safety. In developing a risk management approach, the agencies should consider conducting the risk assessments at regional or state levels and using those assessments to inform decisions about law enforcement resource needs and how to distribute those resources across the country.

We provided a draft of this report for review and comment to the Departments of Agriculture and the Interior. The Forest Service, responding on behalf of Agriculture, agreed with our report's findings and recommendation; the agency's written comments are reprinted in appendix II. Interior—in an e-mail through its liaison to GAO on November 15, 2010—agreed with our report's recommendation and also provided technical comments, which we incorporated into the report as appropriate. In its written comments, the Forest Service stated that it is developing a template for its regional offices to use in preparing annual regional law enforcement plans that will assist the agency in setting priorities for allocating law enforcement resources.
We commend the agency for taking this action and believe that such a template has the potential to improve the consistency of information available to senior agency leaders making decisions about law enforcement resources. However, it is unclear from the agency's written response whether the template it is developing incorporates risk management elements. As our report notes, an effective risk management approach would include (1) comprehensively identifying the risks posed by illegal activities on federal lands, (2) assessing identified risks to determine their magnitude and the likelihood of their occurrence, and (3) using this information in determining the law enforcement resources the agencies need and how to distribute those resources.

We are sending copies of this report to the appropriate congressional committees; Secretaries of Agriculture and the Interior; Chief of the Forest Service; Directors of the Bureau of Land Management, Fish and Wildlife Service, and National Park Service; and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staff have questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

The objectives of our review were to determine (1) the types of illegal activities occurring on federal lands and the effects of those activities on natural and cultural resources, the public, and agency staff; (2) how the agencies have used their law enforcement resources to respond to these illegal activities; and (3) how the agencies determine their law enforcement resource needs and distribute these resources.
To determine the types of illegal activities occurring on federal lands, we reviewed documents and interviewed officials from the headquarters and regional or state offices of four federal land management agencies: the Forest Service in the Department of Agriculture and the Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior. We also collected and analyzed agency data on the recorded frequency of different types of illegal activities. Using this information, we identified about 20 categories of illegal activities occurring on federal lands and interviewed agency officials at headquarters and at regional and state offices to corroborate and refine these categories. To determine the occurrence of different types of illegal activities in different areas of the country, we interviewed agency law enforcement officials at headquarters and in each regional or state office and, using a standardized set of questions, asked them to identify which types of illegal activities placed the greatest demands on their law enforcement resources. To determine the effects of illegal activities on natural and cultural resources, the public, and agency staff, we interviewed agency officials at headquarters and selected units, who described the effects that can result from different types of illegal activities. Because the agencies lack nationwide information on these effects, and to better understand any regional or agency variation in the occurrence and effects of different types of illegal activities, we visited or contacted 26 selected agency units in eight geographic areas throughout the United States (see table 2). Units were selected on the basis of our interviews with regional and state office officials and to broadly represent the types of illegal activities occurring on federal lands. 
For each unit, we (1) reviewed documents, including assessments or reports describing the effects of illegal activities; (2) interviewed law enforcement and, at some units, land management officials about the occurrence and effects of illegal activities; and (3) observed locations in the field that have been damaged by illegal activities. To determine how the agencies have used their law enforcement resources to respond to illegal activities, we analyzed available data on law enforcement staffing for each agency. We assessed the reliability of each agency's data and, on the basis of our audit objectives, determined that the data were sufficiently reliable to report. In addition, we reviewed congressional appropriations to the agencies for responding to specific types of illegal activities, such as illegal crossings of the U.S. border with Mexico or marijuana production on federal lands; congressional committee direction instructing the agencies to target law enforcement resources toward responding to specific illegal activities; and agency documents describing how the agencies used law enforcement resources to respond to these specific activities. We also reviewed agency guidance, analyzed available data, and interviewed agency officials at headquarters and selected units to determine how the agencies temporarily assign staff to areas needing additional law enforcement resources. Finally, we reviewed agency documentation on training requirements for law enforcement officers and interviewed agency officials at headquarters and the units we visited to obtain their perspectives on the sufficiency of training in preparing officers to respond effectively and safely to illegal activities.
To determine how land management agencies identify their law enforcement resource needs and distribute those resources, we asked agency law enforcement officials at headquarters and at regional or state offices to identify the information they consider and the processes they use to make law enforcement staffing decisions. To identify federal requirements and best practices for incorporating risk management into agency decision making, we reviewed relevant guidance, including GAO’s Standards for Internal Control in the Federal Government, as well as other GAO reports on using risk management to inform agency decisions about how to distribute agency resources. To evaluate the extent to which the agencies met risk management requirements and incorporated best practices, we reviewed examples of the types of information officials consider in making resource decisions, including (1) agency data on the occurrence of illegal activities; (2) agency information on the effects of illegal activities on natural and cultural resources, the public, and agency staff; (3) agency law enforcement plans for individual units and regions or states; (4) risk assessments the agencies have conducted for specific types of illegal activities; and (5) descriptions of formal decision-support tools some of the agencies use to analyze their resource needs, examples of how these tools have been used to inform decision making, and available assessments of these tools. To obtain their perspectives on information and processes used to determine their resource needs and distribution, we also interviewed agency officials at headquarters, at regional or state offices, and at the units we visited. We conducted this performance audit from July 2009 through December 2010, in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact person named above, David P. Bixler, Assistant Director; Ellen W. Chu; Jonathan Dent; Christy Feehan; Alma Laris; Michael Lenington; Micah McMillan; Rebecca Shea; and Jeanette Soares made key contributions to this report. Also contributing to this report were Melinda L. Cordero, Richard M. Stana, and Kiki Theodoropoulos. | Four federal agencies--the Forest Service in the Department of Agriculture and the Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior--are responsible for managing federal lands, enforcing federal laws governing the lands and their resources, and ensuring visitor safety. Illegal activities occurring on these lands have raised concerns that the four agencies are becoming less able to protect our natural and cultural resources and ensure public safety. GAO examined (1) the types of illegal activities occurring on federal lands and the effects of those activities on natural and cultural resources, the public, and agency employees; (2) how the agencies have used their law enforcement resources to respond to these illegal activities; and (3) how the agencies determine their law enforcement resource needs and distribute these resources. GAO reviewed agency documents, interviewed agency officials, and visited or contacted 26 selected agency units. A wide variety of illegal activities occurs on federal lands, damaging natural and cultural resources and threatening the safety of the public and agency employees. These activities can range from traffic violations to theft of natural and cultural resources to violent crimes. 
The frequency with which these illegal activities occur is unknown, as agency data do not fully capture the occurrence of such activities; similarly, the extent of resource damage and threats to public and agency employee safety is also unknown. These activities can have overlapping effects on natural, cultural, and historical resources; public access and safety; and the safety of agency employees. For example, illegal hunting results in the loss of wildlife and may also reduce opportunities for legal hunting. Also, cultivation of marijuana not only increases the availability of illegal drugs but fouls ecosystems and can endanger public and agency employee safety. And theft or vandalism of archaeological and paleontological resources can result in the loss or destruction of irreplaceable artifacts, diminishing sites for future visitors and depriving scientists of important sources of knowledge. In response to illegal activities occurring on federal lands, agencies have taken a number of actions. For example, three of the four agencies have increased their number of permanent law enforcement officers in recent years. The Bureau of Land Management increased its number of law enforcement officers by about 40 percent since fiscal year 2000, the Forest Service by almost 18 percent during the same period, and the Fish and Wildlife Service by about 26 percent since fiscal year 2006. The agencies have also directed officers to respond specifically to marijuana cultivation and illegal border activities, assigned officers temporarily to areas needing a greater law enforcement presence during certain events and law enforcement operations, and increased the training required for new officers. Although land management agencies consider varied information on the occurrence and effects of illegal activities on federal lands, the agencies do not systematically assess the risks posed by such activities when determining their needs for resources and where to distribute them. 
While available information helps the agencies to identify many of the risks that illegal activities pose to natural and cultural resources, the public, and agency employees, limitations in this information do not allow officials to fully assess either the magnitude of those risks or the likelihood of their occurrence. As a result, the agencies cannot systematically assess the relative risks faced by the hundreds of individual land management units across the country when making decisions about needed law enforcement resources and how to distribute those resources. Without systematic approaches to assess the risks they face, the agencies may have limited assurance that they are allocating scarce resources in a manner that effectively addresses the risk of illegal activities on our nation's federal lands. GAO recommends that the agencies adopt a risk management approach to systematically assess and address threats and vulnerabilities presented by illegal activities on federal lands. In commenting on a draft of this report, the Forest Service and Interior concurred with GAO's recommendation. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The loss of lives and property resulting from commercial motor vehicle accidents has been a focus of public concern for several years. In 2006, about 5,300 people died as a result of crashes involving large commercial trucks or buses, and about 126,000 more were injured. A recent study performed by DOT showed that a significant number of commercial driver crashes were due to a physical impairment of the driver. Specifically, DOT found that about 12 percent of the crashes in which the cause could be identified were due to drivers falling asleep, being disabled by a heart attack or seizure, or other physical impairments. The Federal Motor Carrier Safety Administration (FMCSA) within DOT shoulders the primary federal responsibility for reducing crashes, injuries, and fatalities involving large trucks and buses. FMCSA’s primary means of preventing these crashes is to develop and enforce regulations to help ensure that drivers and motor carriers are operating in a safe manner. FMCSA’s regulations, among other things, require that drivers of commercial motor vehicles be 21 years old, be able to read and speak the English language, have a current and valid commercial motor vehicle operator’s license, have successfully completed a driver’s road test, and be physically qualified to drive. As part of these regulations, FMCSA established standards for the physical qualifications of commercial drivers, including the requirement of a medical certification from a medical examiner stating that the commercial driver is physically qualified to operate a commercial motor vehicle. See appendix II for a description of the federal medical requirements. The National Transportation Safety Board (NTSB), an independent federal agency that investigates transportation accidents, considers the medical fitness of commercial drivers a major concern. Over the past several years, NTSB has reported on serious flaws in the medical certification process of commercial drivers.
NTSB stated that these flaws can lead to increased highway fatalities and injuries for commercial vehicle drivers, their passengers, and the motoring public. In 2001 NTSB recommended eight safety actions to improve the oversight of the medical certification process, in response to a bus crash that killed 22 people in Louisiana. According to NTSB, all eight of the recommendations currently remain open. In response to FMCSA’s failure to adequately address NTSB’s recommendations, NTSB placed the oversight of medical fitness on its “Most Wanted” list in 2003. Table 1 details each of NTSB’s recommendations. Several fatal crashes highlight the need for and importance of an effective medical certification process. For example, in July 2000, a truck collided with a Tennessee Highway Patrol vehicle protecting a highway work zone. The patrol car exploded at impact, killing the state trooper. The driver of the truck had previously been diagnosed with sleep apnea and hypothyroidism, and had a similar crash in 1997, when he struck the rear of a patrol car in Utah. NTSB stated that it believes that if a comprehensive medical oversight program had been in place at the time of the accident, this driver, with known and potentially incapacitating medical conditions, would have been less likely to have been operating a commercial vehicle. This accident, the NTSB said, “demonstrates how easily unfit drivers are able to take advantage of the inadequacies of the current medical system, resulting in potentially fatal consequences.” In May 2005, a truck collided with a sport utility vehicle in Kansas, killing a mother and her 10-month-old baby. Prior to the accident, a physician diagnosed the truck driver with a severe form of sleep apnea. The truck driver subsequently went to another physician, who issued the medical certificate because the driver did not disclose this illness. The truck driver was found guilty of two counts of vehicular manslaughter.
In August 2005 in New York, a truck collided with a motor vehicle, killing the occupants. The truck driver admitted to forging a medical certificate required to get his CDL because he had been diagnosed with a seizure disorder. The truck driver recently pleaded guilty to two counts of manslaughter. Commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. However, there is general agreement that careful medical evaluations are necessary to ensure that serious medical conditions do not preclude the safe operation of a commercial vehicle. It is impossible to determine from data analysis which commercial drivers receiving disability benefits have a medical condition that precludes them from safely driving a commercial vehicle because medical determinations are largely based on subjective factors that are not captured in databases. As such, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. Our analysis of DOT data and disability data from the four selected federal agencies, SSA, VA, OPM, and DOL, found that about 563,000 individuals had been issued CDLs and were receiving full medical disability benefits. This represented over 4 percent of all CDLs in the DOT database. However, because DOT’s database includes drivers with suspended, revoked, or lapsed licenses, the actual number of active commercial drivers who receive full federal disability benefits cannot be determined. Also, our analysis does not include drivers with severe medical conditions who are not in the specific disability programs we selected. The majority of the individuals with serious medical conditions from our 12 selected states had an active CDL. Specifically, as shown in figure 1, of the 563,000 CDL holders receiving full disability benefits, about 135,000 of those individuals were from our 12 selected states.
About 114,000 of these 135,000 individuals, or about 85 percent, had an active CDL according to CDL data provided by the 12 selected states. Further, our analysis of the state CDL data indicates that most of the licenses were issued after the commercial driver was found to be eligible for full disability benefits. Specifically, about 85,000 of the 135,000 individuals, or about 63 percent, had their CDL issued after the federal agency determined that the individual met the federal requirements for full disability benefits, according to data from our four selected federal agencies. See appendix III for details, by selected state, on the number of (1) commercial drivers with active CDLs, (2) commercial drivers with an active CDL even though they had a medical condition from which they received full federal disability benefits, and (3) commercial drivers who were issued a CDL after the driver was approved for full federal disability benefit payments. Because much of the determination of the medical fitness of commercial drivers relies on subjective factors, and because there are ways to circumvent the process (as shown below), it is impossible to determine the extent to which these commercial drivers have a medical condition that would preclude them from safely driving a commercial vehicle. As such, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. However, because these individuals are receiving full disability benefits, it is likely that these medical conditions are severe. Further, our analysis showed that over 1,000 of these drivers were diagnosed with vision, hearing, or seizure disorders, medical conditions for which a CDL would routinely be denied. Our investigations detail examples of 15 cases where careful medical evaluations did not occur for commercial drivers who were receiving full medical disability benefits.
The case studies were selected from approximately 30,000 individuals from Florida, Maryland, Minnesota, and Virginia that had their CDL issued after the federal agency determined that the individual met the federal requirements for full medical disability benefits. For all 15 cases, we found that the states renewed the drivers’ CDLs after the drivers were found by the federal government to be eligible for full disability benefits. For more detailed information on criteria for selection of the 15 cases, see appendix I. On the basis of our investigation of these 15 cases, we identified instances where careful medical examinations did not occur. Most states do not require commercial drivers to provide medical certifications to be issued a CDL. Instead, many states only require individuals to self-certify that a medical examiner granted them a medical certification allowing them to operate commercial vehicles, thus meeting the minimum federal requirements. As a result, we found several commercial drivers who made false assertions on their self-certification that they received a medical certification when in fact no certification was made. For more information on state requirements for medical certifications, see appendix IV. In addition, our investigations found that commercial drivers produced fraudulent documentation regarding their medical certification. Specifically, we found instances where commercial drivers forged a medical examiner’s signature on a medical certification form. In addition, we also found a driver who failed to disclose to the medical examiner that another doctor had prescribed him morphine for his back pain. Finally, our investigations found certain medical examiners did not follow the federal requirements in the determination of medical fitness of commercial drivers. For example, one medical examiner represented to GAO that she did not know that a driver’s deafness would disqualify the individual from receiving a medical certification. 
Table 2 highlights 5 of the 15 drivers we investigated. For all cases we investigated, the CDL was issued after the driver’s disability benefits started. Appendix V provides details on the other 10 cases we examined. We are referring all 15 cases to the respective state driver license agencies for further investigation. The following provides illustrative detail on three of the cases we examined. Case 1: A bus driver in Maryland has been receiving Social Security disability benefits since March 2006 due to his heart conditions. Specifically, the driver had open heart surgery in 2003 to repair a ruptured aorta, had a stroke in 2005, and shortly thereafter had another surgery to replace a heart valve. In June 2006, approximately 3 months after Social Security determined the driver was fully disabled, the Maryland driver license agency renewed his CDL for 5 years with a “Passenger” endorsement. The bus driver provided our investigator a forged medical certificate. Specifically, we found that the medical certificate did not have the required medical license number, the physician did not have any record that the bus driver underwent a medical examination for a CDL, and the physician denied conducting a CDL medical exam or signing the medical certificate. Surprisingly, the medical practice also had a copy of the forged medical certificate in its files. The medical practice’s staff stated, however, that it is not uncommon for a patient to bring documents to the office and ask that they be stored in their medical records. The driver’s CDL does not expire until 2011. Case 2: A Virginia truck driver has received SSA disability benefits for over 10 years. The driver’s disability records indicate that the driver had multiple medical conditions, including complications due to an amputation, and that the driver is “also essentially illiterate.” The truck driver has a prosthetic right leg resulting from a farm accident.
Although the driver possesses a current medical certificate, the medical examiner did not specify on the certificate that it is only valid when accompanied by a Skills Performance Evaluation (SPE) certificate. The truck driver stated that, to test his prosthetic leg, he was asked to use it to push the medical examiner across the room in a rolling chair. In our investigation, we attempted to contact the medical examiner but discovered that he is no longer employed by that clinic; the state had revoked his medical license for illegally distributing controlled substances. In 2006, the truck driver was involved in a single-vehicle accident when the load in his truck shifted as he made a turn and the truck overturned. Prior to October 2007, the truck driver had a CDL with both “Tanker” and “Hazmat” endorsements. In October 2007, the state driver license agency renewed his CDL with a “Tanker” endorsement, which will not expire until 2012. Case 3: A bus driver has been receiving Social Security disability benefits since 1994 for chronic obstructive pulmonary disorder (COPD). The bus driver currently uses three daily inhalers to control his breathing and has a breathing test conducted every 6 months. The bus driver stated that he “gets winded” when he walks to his mailbox and that he “occasionally blacks out and forgets things.” The driver stated, however, that he has no problem driving a bus, although he cannot handle luggage or perform any other strenuous duties. Despite not possessing a valid medical certificate, companies continue to hire him as a bus driver on an ad hoc basis. For example, the driver drove a passenger bus as recently as 1 month prior to the time of our interview. The driver stated that the companies have not asked to see his medical certificate. He further stated that because most companies are “hurting for drivers,” they “don’t ask a lot of questions” and pay many of their drivers in cash. The driver’s CDL expires in 2010.
We provided a draft of our report to DOT for review and comment. We received e-mail comments on the draft on June 16, 2008, from FMCSA’s Office of Medical Programs. In FMCSA’s response, FMCSA stated that our first objective implies that individuals who are fully disabled have severe medical conditions that may also prevent safe driving. FMCSA stated the following: Disability, even full disability associated with a diagnosis, does not necessarily mean that an individual is medically unfit to operate a commercial vehicle. Disability is not related necessarily to when a medical condition occurred or recurs. The onset of a disease or disabling medical condition is more relevant to medical fitness than when the disability benefits and payments began. As an example, a fully disabled individual may have accommodated to the disability and may improve with treatment while receiving lifelong disability payments. In general, a medical diagnosis alone is not adequate to determine medical fitness to operate a commercial vehicle safely. As an example, multiple sclerosis, while disabling, has several progressive phases, and is not necessarily disqualifying. In addition, FMCSA did not believe that we accurately characterized the 15 cases where careful medical evaluations did not occur. FMCSA stated that this implies these drivers were evaluated by someone for medical fitness for duty, but in 9 cases, the driver was not certified or not evaluated by a medical examiner. We believe our report clearly acknowledges that it is impossible to determine the extent to which these commercial drivers have medical conditions that would preclude them from safely driving a commercial vehicle. In the report, we state that commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. 
Further, our report acknowledged that because medical determinations rely in large part on subjective factors that are not captured in databases, it is impossible to determine from data mining and matching the extent to which commercial drivers have a medical condition that precludes them from safely driving a commercial vehicle, and therefore whether the certification process is effective. Thus, our analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. We also believe that we fairly characterized all 15 cases as lacking a careful medical evaluation. For all 15 cases that we reviewed, we found that the medical evaluation was not adequate or did not occur. Thus, we conclude that a careful medical evaluation did not occur for all 15 drivers in our case studies. FMCSA also provided a technical comment, which we incorporated into the report. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Secretary of Transportation. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-6722 or [email protected] if you have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To determine to the extent possible the number of individuals holding a current commercial driver license (CDL) who have serious medical conditions, we presumed that individuals receiving full federal disability benefits were eligible for these benefits because of the seriousness of their medical conditions.
As such, we obtained and analyzed the Department of Transportation’s (DOT) Commercial Driver License Information System (CDLIS) database as of May 2007. For the Social Security Administration (SSA) and the Department of Veterans Affairs (VA) we provided the CDLIS commercial driver information to those agencies. SSA and VA then matched the commercial drivers to the individuals receiving benefits for their disability programs and provided us those results. We also obtained and analyzed the recipient files for four additional federal disability programs. These include the Office of Personnel Management’s (OPM) civil service retirement program and the three programs administered by the Department of Labor: Black Lung, Federal Employee Compensation Act, and the Energy Employees Occupational Illness Compensation Program. We matched the CDL holders from CDLIS to the four federal disability recipient files based on social security number, name, and date of birth. We further analyzed the CDL and disability data to ensure that the commercial drivers met the following criteria: the individual must be currently receiving disability benefits, and the individual must be identified as 100 percent disabled according to the program’s criteria. Because CDLIS is an archival database, the CDLIS data contain information on expired CDLs. To identify the active drivers within CDLIS, we obtained CDL data from a nonrepresentative selection of 12 states. The 12 selected states, representing about 42 percent of all CDLs contained in CDLIS, are: California, Florida, Illinois, Kentucky, Maryland, Michigan, Minnesota, Montana, Tennessee, Texas, Virginia, and Wisconsin. The 12 states were selected primarily based on the size of the CDL population. 
Because commercial drivers may develop a serious medical condition after the issuance of the CDL, we also determined the number of individuals who received their CDL after the federal agencies determined them to be eligible for full disability benefits. Our estimate does not include drivers with severe medical conditions who are not in the selected programs we analyzed. We matched the 12 state CDL files to the six CDLIS-disability match files based on driver license number, and identified those CDLs that were current based on license status. To provide case-study examples of commercial drivers who hold active CDLs while also receiving federal disability payments for a disqualifying medical condition, we focused on four states—Florida, Maryland, Minnesota, and Virginia. From these four states, we selected, in a nonrepresentative fashion, 15 commercial drivers for detailed investigation. We identified these driver cases based on our data analysis and mining. For each case, we interviewed, as appropriate, the commercial driver, the driver’s employer, and the driver’s physician to determine whether the medical condition should have precluded the driver from holding a valid CDL. For these 15 cases, we also reviewed state department of motor vehicle reports, police reports, and other public records. To determine the reliability of DOT’s CDLIS data, we used SSA’s Enumeration and Verification System to verify key data elements in the database that were used to perform our work. For the federal disability databases, we assessed the reliability of the data from SSA and VA, which comprise 99 percent of the CDLIS-disability matches. To verify their reliability, we reviewed the program logic used by the agencies to match the CDLIS data with their federal disability recipients. We also reviewed the current Performance and Accountability Reports for the agencies to verify that their systems had successfully undergone the required stewardship reviews.
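The matching steps described above can be sketched in a few lines of Python. This is a simplified illustration only: the field names (ssn, name, dob, license_no, status, receiving_benefits, percent_disabled) are hypothetical, not the actual CDLIS or agency schemas, and a real match against agency databases would involve additional data cleaning and verification.

```python
# Hedged sketch of the CDLIS-to-disability match described above.
# All field names are illustrative assumptions, not actual schemas.

def match_cdl_to_disability(cdl_records, disability_records):
    """Match CDL holders to disability recipients on SSN, name, and
    date of birth, keeping only recipients who are currently receiving
    benefits and are identified as 100 percent disabled."""
    eligible_keys = {
        (r["ssn"], r["name"], r["dob"])
        for r in disability_records
        if r["receiving_benefits"] and r["percent_disabled"] == 100
    }
    return [c for c in cdl_records
            if (c["ssn"], c["name"], c["dob"]) in eligible_keys]

def filter_active(matched_drivers, state_cdl_records):
    """Keep only matched drivers whose state record, joined on driver
    license number, shows a currently active license."""
    active_licenses = {s["license_no"] for s in state_cdl_records
                       if s["status"] == "active"}
    return [m for m in matched_drivers
            if m["license_no"] in active_licenses]
```

The essence of the match is a keyed set lookup; in practice, names and dates must be normalized and key data elements verified (as the report did against SSA’s Enumeration and Verification System) before relying on such a join.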
For the 12 selected states’ CDL databases, we performed electronic testing of the specific data elements in the database that were used to perform our work. In addition, for 5 of the 12 states we verified the query logic used to create the CDL extract files. For the other 7 states we were unable to obtain the query logic. We performed our investigative work from May 2007 to June 2008 in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Federal regulations require that commercial drivers be examined and certified by a licensed medical examiner, such as a licensed physician, physician assistant, or nurse practitioner, to ensure they meet minimum physical qualifications prior to driving. It is the responsibility of both drivers and motor carriers employing drivers to ensure that drivers’ medical certificates are current. According to federal regulations, the medical examiner must be knowledgeable about the regulatory physical qualifications and guidelines as well as the driver’s responsibilities and work environment. In general, the medical certification procedures include the following steps: The driver completes and certifies a medical certification form that includes information about the driver’s health history. The form is provided to the medical examiner as part of the examination. The medical examiner discusses the driver’s health history and the side effects of prescribed medication and common over-the-counter medications. The medical examiner tests the driver’s vision, hearing, blood pressure, pulse rate, and urine specimen (for sugar and protein levels). The medical examiner conducts a physical examination and makes a determination on driver fitness. If the medical examiner determines the driver is fit to drive, he/she signs the medical certificate, which the driver must carry with his/her license. The certificate must be dated.
The medical examiner keeps a copy in his/her records, and provides a copy to the driver’s employer. When the medical examiner finds medical conditions that prevent certification of the physical condition of the driver and this finding is in conflict with the findings of another medical examiner or the driver’s personal physician, the driver can apply to the Federal Motor Carrier Safety Administration (FMCSA) for a determination. Federal regulations and the accompanying medical guidance provide criteria to the medical examiners for determining the physical condition of commercial drivers. Although the medical examiner makes the determination as to whether the driver is medically fit to operate a commercial vehicle, the following provides a general overview of the nature of the physical qualifications: no loss of physical limbs, including a foot, a leg, a hand, or an arm; no impairment of limbs that would interfere with grasping or their ability to perform normal tasks; no established medical history or clinical diagnosis of diabetes currently requiring insulin for control, respiratory dysfunction, or high blood pressure that would affect their ability to control or drive a commercial motor vehicle; no current diagnosis of a variety of coronary conditions and cardiovascular disease including congestive heart failure; no mental disease or psychiatric disorder that would interfere with their ability to drive a commercial vehicle safely; has distant visual acuity and hearing ability that meet stated requirements; does not use a controlled substance or habit-forming drug; and has no current clinical diagnosis of alcoholism. When operating a commercial motor vehicle, drivers must have a copy of the medical examiner’s certificate in their possession. Motor carriers, in turn, are required to maintain a copy of the certificate in their files. When drivers are stopped for a roadside inspection, state inspectors can review the medical examiner’s certificate.
During compliance reviews of motor carriers, FMCSA investigators may also verify the validity of medical certifications on file with the motor carrier. In the main portion of the report, we state that from the 12 selected states 114,000 commercial drivers had a current commercial driver license (CDL) even though they had a medical condition from which they received full federal disability benefits. Further, approximately 85,000, or about 63 percent of the active commercial drivers, were issued a CDL after the driver was approved for full federal disability benefit payments. Table 3 below provides details by each selected state on the number of (1) commercial drivers with active CDLs, (2) commercial drivers with an active CDL even though they had a medical condition from which they received full federal disability benefits, and (3) commercial drivers that were issued a CDL after the driver was approved for full federal disability benefit payments. The states have adopted different levels of control to verify that commercial driver license applicants meet the Department of Transportation (DOT) medical certification requirements. As shown in figure 2, 25 states, or 50 percent, allow drivers to self-certify that they meet the requirements. The self-certification is often simply a check-box on the application. Eighteen states, or 36 percent, require that the commercial driver show the DOT medical certificate to the driver licensing agency at the time of application. Further, 6 states, or 12 percent, not only require that the driver show the DOT medical certificate at the time of application but also maintain a copy of the certificate in the driving records of the applicant. Finally, 1 state did not respond to the inquiries. Table 2 in the main portion of the report provides information on five detailed case studies. Table 4 shows the remaining case studies that we investigated. 
As with the five cases discussed in the body of this testimony, we found drivers with a valid commercial driver license (CDL) who also had serious medical conditions. GAO staff who made major contributions to this report include Matthew Valenta, Assistant Director; Sunny Chang; Paul DeSaulniers; Craig Fischer; John V. Kelly; Jeffrey McDermott; Andrew McIntosh; Andrew O’Connell; Philip Reiff; Nathaniel Taylor; and Lindsay Welter. | Millions of drivers hold commercial driver licenses (CDL), allowing them to operate commercial vehicles. The Department of Transportation (DOT) established regulations requiring medical examiners to certify that these drivers are medically fit to operate their vehicles and provides oversight of their implementation. Little is known on the extent to which individuals with serious medical conditions hold CDLs. GAO was asked to (1) examine the extent to which individuals holding a current CDL have serious medical conditions and (2) provide examples of commercial drivers with medical conditions that should disqualify them from receiving a CDL. To examine the extent to which individuals holding CDLs have serious medical conditions, GAO identified those who were in both DOT's CDL database and selected federal disability databases of the Social Security Administration, Office of Personnel Management, and Departments of Veterans Affairs and Labor and have been identified as 100 percent disabled according to the program's criteria. Because DOT's data also include inactive licenses, GAO obtained current CDL data from 12 selected states based primarily on the size of CDL population. To provide case study examples, GAO focused on four states--Florida, Maryland, Minnesota, and Virginia. For 15 drivers identified from data mining, GAO interviewed, as appropriate, the driver, driver's employer, and driver's physician. GAO is not making any recommendations. 
Commercial drivers with serious medical conditions can still meet DOT medical fitness requirements to safely operate a commercial vehicle and thus hold CDLs. However, there is general agreement that careful medical evaluations are necessary to ensure that serious medical conditions do not preclude the safe operation of a commercial vehicle. Because medical determinations rely in large part on subjective factors that are not captured in databases, it is impossible to determine from data matching and mining alone the extent to which commercial drivers have medical conditions that preclude them from safely driving a commercial vehicle, and therefore whether the certification process is effective. GAO's analysis provides a starting point for exploring the effectiveness of the current CDL medical certification process. Our analysis of commercial license data from DOT and medical disability data from the Social Security Administration, Office of Personnel Management, and Departments of Veterans Affairs and Labor identified about 563,000 individuals who held commercial driver licenses and were determined by the federal government to be eligible for full disability benefits. This represented over 4 percent of all commercial driver licenses in the DOT database. Our analysis of 12 selected states indicates that most of these commercial drivers still have active licenses. Specifically, for these 12 selected states, about 85 percent had a current CDL even though they had a medical condition from which they received full federal disability benefits. The majority of these drivers were issued a CDL after the driver was approved for full federal disability benefits. Our investigations detail examples of 15 cases where careful medical evaluations did not occur on commercial drivers who were receiving full disability benefits for serious medical conditions. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
AGOA provides eligible SSA countries duty-free access to U.S. markets for more than 6,000 dutiable items in the U.S. import tariff schedules. SSA countries are defined in Section 107 of AGOA as the 49 sub-Saharan African countries potentially eligible for AGOA benefits listed in that provision. As a trade preference program, AGOA supports economic development in sub-Saharan Africa through trade and investment and encourages increased trade and investment between the United States and SSA countries as well as intra-SSA trade. In addition, AGOA benefits may lead to improved access to U.S. credit and technical assistance, according to the Department of Commerce’s website and officials from the Departments of Commerce and Labor. AGOA authorizes the President each year to designate an SSA country as eligible for AGOA trade preferences if the President determines that the country has met or is making continual progress toward meeting AGOA’s eligibility criteria, among other requirements. For the purposes of this report, we have organized the act’s eligibility criteria into three broad reform objectives: economic, political, and development (see table 1). In addition, the act requires that an SSA country be eligible for the Generalized System of Preferences (GSP) in order to be considered for AGOA benefits. The U.S. government’s African Growth and Opportunity Act Implementation Guide states that an SSA country must also officially request to be considered for AGOA benefits. Over the lifetime of AGOA, 47 of the 49 SSA countries listed in the act have requested consideration for AGOA eligibility, according to USTR officials. Figure 1 shows a map of Africa that identifies the 39 SSA countries that were eligible for AGOA benefits and the 10 SSA countries that were ineligible for AGOA benefits as of January 1, 2015. The U.S. 
government uses the annual eligibility review process and forum mandated by AGOA to engage with sub-Saharan African countries on their progress toward economic, political, and development reform objectives reflected in AGOA’s eligibility criteria. USTR manages the annual consensus-based review process, which begins by collecting information from the public and other agencies of the AGOA Implementation Subcommittee. For SSA countries experiencing difficulty meeting one or more eligibility criteria, the U.S. government may decide on specific engagement actions to encourage reforms in specific areas. Over the lifetime of AGOA, 13 countries have lost their AGOA eligibility, although 7 countries eventually had their eligibility restored. The U.S. government uses the annual AGOA Forum to further engage with representatives from sub-Saharan Africa on challenges and encourage progress on AGOA’s economic, political, and development reform objectives. The AGOA Implementation Subcommittee of the TPSC conducts the AGOA eligibility review annually to discuss whether a country has established or is making continual progress toward AGOA’s reform objectives and makes consensus-based recommendations on each country’s eligibility. USTR’s Office of African Affairs oversees the implementation of AGOA and chairs the AGOA Implementation Subcommittee. The full TPSC must review and approve the subcommittee’s recommendations. The recommendations are then forwarded to the U.S. Trade Representative for review and approval. Once the recommendations are approved, the U.S. Trade Representative sends the recommendations to the President. The President makes the final decision on AGOA eligibility. The flow diagram in figure 2 provides an overview of the AGOA eligibility review process, organized into three phases: (1) initiation and data collection, (2) development of subcommittee recommendations, and (3) review and approval by the TPSC, USTR, and the President. 
Phase 1: initiation and data collection. Generally, USTR begins the annual eligibility review process in September or October by requesting that the agencies that form the AGOA Implementation Subcommittee— the Departments of Agriculture, Commerce, Labor, State, and the Treasury; USAID; Council of Economic Advisers; and National Security Council—provide information about each country’s progress on reform objectives related to the eligibility criteria. USTR also requests public comments at this time. These agencies generally prepare and submit their reports to USTR by mid-October. State also distributes information collected by overseas staff on progress made by SSA countries on AGOA’s reform objectives to the other members of the subcommittee to help inform the development of their reports. Subcommittee agencies frequently provide in-depth information related to the AGOA eligibility criteria that are most pertinent to their specific mission but may also provide input related to other eligibility criteria. For example, while the Department of Labor’s reports primarily focus on labor issues, its reports on each country may also include information related to other eligibility criteria, such as human rights and the rule of law. (Table 2 identifies the primary focus of each subcommittee agency and the related AGOA reform objectives and corresponding eligibility criteria.) In phase 1 of the eligibility review process, USTR also publishes a notice in the Federal Register requesting public comment on SSA countries eligible to receive AGOA benefits. In 2013, USTR received 11 comments from a range of sources, including SSA governments, SSA private companies, a U.S. industry organization, a private U.S. citizen, a federation of U.S. unions, and a coalition of trade associations. The Federal Register notice and a presidential proclamation that finalizes eligibility decisions are the only components of the eligibility review process that are public. 
Phase 2: development of subcommittee recommendations. USTR compiles the information provided by each subcommittee agency in phase 1 into a paper on each country. These papers also include broad-ranging information that USTR staff provide and any public comments that USTR receives in response to its notice in the Federal Register. USTR distributes the country papers to members of the AGOA Implementation Subcommittee for review and discussion at the subcommittee meeting. Typically, the AGOA Implementation Subcommittee convenes in November to review each country’s progress in establishing or making continual progress toward AGOA’s reform objectives. Usually, over a period that may range from a few days to a few weeks, the subcommittee works through each agency’s priorities and viewpoints on each country’s progress on the eligibility criteria, according to agency officials. The duration of this phase varies depending on how quickly the agencies can reach consensus. Any differences in perspective regarding countries’ progress are discussed and consensus-based recommendations are reached. For example, the U.S. Department of Agriculture has regularly raised concerns about progress on economic reform objectives in certain SSA countries, such as import bans and procedures to control pests and diseases in agricultural products. Countries have received démarches or letters for such issues; however, the subcommittee has not recommended that an SSA country lose its AGOA eligibility because of market access issues, according to USTR officials. Phase 3: review and approval by the TPSC, the U.S. Trade Representative, and the President. The subcommittee’s recommendations are presented to the full TPSC for review and approval. After the TPSC reaches consensus, USTR staff prepare a decision memorandum for the U.S. Trade Representative’s approval. The TPSC, the U.S. 
Trade Representative, and the President have the authority to modify the subcommittee’s recommendations, according to agency officials. The U.S. Trade Representative prepares a decision memo with recommendations to the President for approval. Then, generally in December, the President issues a proclamation that implements any changes to SSA countries’ AGOA eligibility status. The proclamation is published in the Federal Register. Regardless of a country’s eligibility status, the U.S. government uses the eligibility review as one of many tools to initiate conversations with SSA countries about economic, political, and development reforms, according to agency officials. The subcommittee reviews each country individually, considering each country’s particular situation, to determine how best to encourage progress toward specific eligibility criteria. The TPSC reviews the subcommittee’s recommendations and makes the ultimate decision on specific actions the U.S. government can take to encourage countries to address particular concerns related to the eligibility criteria. For example, the TPSC may determine that the relevant U.S. ambassador, or other U.S. government official, should meet with appropriate country representatives. Other possible actions include issuing démarches or letters that describe the eligibility criteria concerns and outline actions the country may take to address those concerns. In some cases, the TPSC may recommend specific steps a country should take to maintain or restore its AGOA eligibility. After the TPSC’s concerns are communicated to the country, relevant U.S. government officials manage engagement with the country and report back to the subcommittee on the country’s progress. Although the eligibility review is annual, interim eligibility reviews may be held to gauge the progress countries are making on specific eligibility criteria. 
For example, in October 2011, an interim review reinstated AGOA eligibility for Côte d’Ivoire, Guinea, and Niger. All three countries had lost AGOA eligibility because of undemocratic changes in government and then regained eligibility following free and fair elections. The following example illustrates how the U.S. government uses the eligibility review process to engage with SSA countries on issues related to specific reform objectives: Swaziland was deemed eligible for AGOA in January 2001. However, several years ago, the U.S. government began engaging with Swaziland on concerns related to internationally recognized labor rights through a series of letters and démarches issued by USTR and State. Over the course of several years, Swaziland made some progress on labor issues, but conditions related to labor rights later deteriorated. U.S. government officials met several times with Swaziland officials to discuss steps to improve labor rights, including a USTR-led interagency trip in April 2014. In particular, the officials were concerned that Swaziland had failed to make continual progress in protecting freedom of association and the right to organize. The U.S. officials were also concerned by Swaziland’s use of security forces and arbitrary arrests to stifle peaceful demonstrations, and the lack of legal recognition for labor and employer federations. Despite U.S. efforts to engage with the country’s government, Swaziland failed to make the necessary reforms. In June 2014, an interim review resulted in the President declaring Swaziland ineligible, effective as of January 1, 2015. Over the lifetime of AGOA, 13 SSA countries have lost their AGOA eligibility for not meeting certain eligibility criteria, although 7 of these countries eventually had their AGOA eligibility restored. As of January 1, 2015, the 49 SSA countries fell into four categories based on their history of AGOA eligibility. (App. II provides a list of the SSA countries by eligibility status.) 
Eligibility lost and regained. Seven countries had lost AGOA eligibility at some time in the past but later regained it. Five of the countries experienced coups, one country lost eligibility after its President extended his term in violation of the country’s constitution, and one country lost eligibility because of political unrest and armed conflict. All seven countries had their AGOA beneficiary status restored following a return to democratic rule. (Fig. 3 provides additional information regarding SSA countries that have lost and regained AGOA eligibility.) Eligibility lost and not regained. Six SSA countries have lost and not regained AGOA eligibility. One lost eligibility following a coup; three were deemed ineligible because of concerns about human rights abuses; one lost eligibility because of issues with labor rights; and one country lost eligibility following political violence and armed conflict. (Fig. 4 provides additional information regarding SSA countries that have lost and not regained AGOA eligibility.) Eligibility never lost. About two-thirds of SSA countries, 32 of 49 countries have maintained their AGOA eligibility status since it was first granted. Six of 32 were not deemed eligible when AGOA was originally enacted in 2000. Although these countries had expressed interest in the AGOA trade preference program, they did not initially satisfy the eligibility criteria but later obtained eligibility for benefits under AGOA at different times. Never eligible. Four SSA countries have not been eligible for AGOA. Somalia and Sudan have not expressed official interest in the AGOA trade preference program, according to agency officials. Zimbabwe and Equatorial Guinea have not been deemed eligible because of concerns related to AGOA’s eligibility criteria. The AGOA Forum is required under AGOA. 
Its purpose is to foster close economic ties between the United States and SSA countries; however, the forum also supports AGOA reform objectives by holding sessions that specifically address AGOA eligibility criteria. The AGOA Forum is generally held in alternate years in the United States and sub-Saharan Africa and supports AGOA’s reform objectives by facilitating high-level dialogue between the U.S. and SSA governments. The forum also engages the business community and civil society organizations. Generally, the forum takes place over 2 to 3 days and includes three to eight plenary sessions and several breakout sessions as well as workshops. Speakers are typically high-level U.S. and SSA government officials; however, speakers also include officials representing organizations such as the African Union and the United Nations Economic Commission for Africa. A number of U.S. congressional delegations have also participated in the forum. Civil society and private sector groups such as the Economic Justice Network and the African Cotton and Textile Industries Federation also actively participate in the forums. The theme of the AGOA Forum changes from year to year, but the discussions are centered on strengthening the economic connection between the United States and sub-Saharan Africa. For example, the theme of the December 2003 forum, hosted by the United States, was “Building Trade, Expanding Investment,” and the theme of the August 2013 forum, hosted by the Ethiopian government, was “Sustainable Transformation through Trade and Technology.” The 2014 AGOA Forum consisted of a 1-day ministerial meeting that took place during the first U.S.-Africa Leaders Summit in Washington, D.C. This summit included leaders from SSA countries and other parts of Africa. (Table 3 provides the location and theme of each AGOA Forum from 2001 through 2014.) 
Although the annual AGOA Forums are trade-oriented, they also facilitate further engagement between the United States and SSA countries through dialogue about the reform objectives reflected in AGOA eligibility criteria. Throughout the years, AGOA Forum workshops have focused on a number of the eligibility criteria, including good governance, intellectual property rights, health care, and labor rights. For example, at the 2013 AGOA Forum in Addis Ababa, a session co-chaired by Liberian and U.S. senior government officials highlighted the importance of labor rights in achieving economic growth. As another example, breakout sessions at the 2009 and 2011 AGOA Forums focused on the relationship between good governance and the investment environment. During the forums, U.S. and SSA government officials also hold bilateral meetings to discuss specific issues related to AGOA’s reform objectives and eligibility criteria, according to agency officials. AGOA-eligible countries have fared better than ineligible countries on some economic development indicators since AGOA was enacted, according to our analysis of economic data for SSA countries that were eligible and ineligible for AGOA in 2012; however, AGOA’s impact on economic development is difficult to isolate when additional factors are taken into consideration. Other factors—such as the small share of AGOA exports in the overall exports of many AGOA-eligible countries, the role of petroleum exports in recent income growth, the quality of government institutions, and different levels of foreign aid and investment—make it difficult to isolate how much economic development can be attributed to AGOA. For example, AGOA exports are a small share of overall exports for the majority of AGOA-eligible countries, a fact that could limit AGOA’s impact on economic development in these countries. 
We found evidence that increasing energy prices may also have contributed to income growth within AGOA-eligible countries: from 2000 through 2012, the top three AGOA-eligible petroleum-exporting countries had a much higher growth rate for income per person than other AGOA-eligible countries. We also found that AGOA-eligible countries on average had higher governance scores and received more foreign aid and investment compared with ineligible countries. While these differences may have been facilitated by AGOA eligibility, they may also have contributed to economic development in AGOA-eligible countries, a possibility that makes it difficult to isolate AGOA’s impact on economic development. Both before and since AGOA was enacted in 2000, income per person has been higher in AGOA-eligible countries, on average, compared with ineligible countries. The average annual income per person for 37 AGOA-eligible countries was $876 in 2000, prior to AGOA’s implementation, and $1,132 in 2012. The variation in income per person among the eligible countries was large; for example, in 2012, Seychelles had the highest income per person at $14,303 and Burundi had the lowest at $153. For 8 AGOA ineligible countries, average annual income per person was $353 in 2000 and $450 in 2012. Among the ineligible countries, income per person also varied widely. In 2012, Equatorial Guinea had the highest income per person at $14,199, whereas the Democratic Republic of Congo had the lowest at $165. The average annual growth in income per person was slightly higher in AGOA-eligible countries: eligible countries’ income per person on average grew 2.2 percent per year from 2000 to 2012, compared with 2.1 percent per year in ineligible countries. Figure 5 shows trends in annual income per person from the enactment of AGOA through 2012, for eligible and ineligible countries. (For additional details on each country’s annual income per person before and after AGOA, see app. III.) 
Exports under AGOA have accounted for a small proportion of exports for most AGOA-eligible countries. Our analysis shows that in 2013 AGOA exports accounted for less than 0.5 percent of overall exports for the majority of countries—for these countries, the small proportion of AGOA exports in their overall exports could limit AGOA’s impact on economic development. Figure 6 shows the number of AGOA-eligible countries in 2013 separated into categories based on the level of their exports under AGOA, as a share of their overall exports. For example, 4 of the AGOA-eligible countries had no AGOA exports at all in 2013, and in the same year, AGOA accounted for less than 5 percent of overall exports for 26 other AGOA-eligible countries. In 2013, AGOA accounted for more than half of overall exports for only 1 country, Chad, a top petroleum exporter among AGOA-eligible countries. While AGOA-eligible countries have had higher income per person than ineligible countries, the fastest growth in income per person has been concentrated in a few petroleum-exporting AGOA-eligible countries. From 2001 to 2013, petroleum products accounted for over 80 percent of U.S. imports under AGOA. Among AGOA-eligible countries, we identified Nigeria, Angola, and Chad as the top three petroleum exporting countries based on trade data in 2013. These countries collectively accounted for 90 percent of all petroleum exports to the United States under AGOA in 2013. When we separated out these countries in our analysis, we found that from 2001 through 2012 the top three AGOA-eligible petroleum exporting countries as a group had, on average, slightly lower levels of annual income per person compared with all other AGOA-eligible countries considered as a group: $960 versus $1,026. 
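The share categories described above follow directly from each country's AGOA exports divided by its overall exports. The following is a minimal illustrative sketch of that calculation and binning; the country names and dollar figures are hypothetical, and this is not GAO's actual code or data.

```python
# Illustrative sketch (hypothetical data, not GAO's actual analysis):
# compute each country's AGOA exports as a share of its overall exports
# and group countries into the share bands used in the report.

def agoa_share(agoa_exports, total_exports):
    """Return AGOA exports as a percentage of overall exports."""
    if total_exports == 0:
        return 0.0
    return 100.0 * agoa_exports / total_exports

def share_band(share_pct):
    """Bucket a share (in percent) into the report's descriptive bands."""
    if share_pct == 0:
        return "no AGOA exports"
    if share_pct < 5:
        return "under 5 percent"
    if share_pct <= 50:
        return "5 to 50 percent"
    return "over 50 percent"

# Hypothetical countries: (AGOA exports, total exports), US$ millions
countries = {
    "Country A": (0, 800),     # no AGOA exports at all
    "Country B": (30, 1200),   # a small AGOA share
    "Country C": (900, 1500),  # AGOA dominates, e.g. a petroleum exporter
}

for name, (agoa, total) in countries.items():
    pct = agoa_share(agoa, total)
    print(f"{name}: {pct:.1f}% of exports under AGOA -> {share_band(pct)}")
```

Applied to real trade data, this binning reproduces the distribution discussed above, in which most eligible countries fall in the lowest bands and only one country (Chad) exceeds half of overall exports.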
However, figure 7 shows that from 2000 through 2012, these top three petroleum exporters had a much higher average annual growth rate as measured in income per person compared with the other AGOA-eligible countries: 4.5 percent per year versus 1.4 percent per year. The difference in income-per-person growth between the top three petroleum exporters and the other AGOA-eligible countries can be explained partly by rising energy prices. From 2000 through 2012, global prices for petroleum increased by 272 percent. Prior to AGOA’s implementation in 2000, the group of SSA countries eligible for AGOA benefits in 2012 had higher governance scores than ineligible countries. Academic studies have found a positive relationship between the quality of governance institutions and economic growth. Therefore, gains in economic growth since 2000 among AGOA-eligible countries may have been driven to some degree by governance that was more conducive to economic development. We analyzed two measures of institutional quality from the Worldwide Governance Indicators that capture some aspects of the security of private property, namely scores for the rule of law and political stability. We found that AGOA-eligible countries had substantially higher scores on both rule of law and political stability in 2000 than countries that were not eligible for AGOA (see fig. 8). Pre-existing differences in institutional quality scores could explain in part why AGOA-eligible countries on average had higher annual income per person and slightly higher growth in annual income per person after the implementation of AGOA. According to our analysis of the AGOA eligibility review process, given that governance is considered in the annual AGOA eligibility review, AGOA-eligible countries may also have benefited from an ongoing incentive to sustain or improve the quality of their governance institutions. 
Figure 8 shows that the differences in governance scores between eligible and ineligible countries in 2012 were similar to those in 2000. These persistent differences in the quality of governance institutions could also have contributed to the differences in economic growth between AGOA-eligible countries and ineligible countries after the implementation of AGOA. AGOA-eligible countries on average have received more foreign aid per person and higher foreign direct investment (FDI) than ineligible countries since the implementation of AGOA. The different levels of foreign aid and FDI, which could play a role in economic development and poverty reduction, also may have contributed to the differences in income per person between AGOA-eligible countries and ineligible countries that we observed. Moreover, according to our analysis of aid and investment flows to SSA countries (below), being eligible for AGOA may have improved the ability of countries to attract aid and investment. Our analysis shows that on average AGOA-eligible countries received more foreign aid per person than ineligible countries. We analyzed data on country programmable aid (CPA) from the Organisation for Economic Co-operation and Development (OECD). According to the OECD, CPA captures the main cross-border aid flows to recipient countries and excludes some forms of official development assistance that are neither fully transparent to, nor manageable by, recipient countries, including humanitarian aid in response to crises and natural disasters, and debt relief provided by donor nations. The United States allocated an estimated $7.04 billion in U.S. bilateral aid to Africa in fiscal year 2014. The aid was intended to help SSA countries in areas including health; climate change; food security; and, more recently, power. From 2000 to 2012, AGOA-eligible countries received more than twice as much aid per person on average than ineligible countries (see fig. 9). 
AGOA-eligible countries on average also received more FDI than ineligible countries. According to a 2014 U.S. International Trade Commission report, global inflows from FDI into SSA countries increased almost sixfold between 2000 and 2012. We analyzed FDI as a share of a country’s gross domestic product (GDP) to take into consideration the size of the country’s economy. From 2001 to 2013, the amount of FDI each SSA country received relative to the size of its overall economy varied considerably. For example, among SSA countries that were net recipients of FDI in 2013, Burundi received FDI amounting to less than half a percent of its GDP (the lowest in sub-Saharan Africa), while Liberia received FDI amounting to about 57 percent of its GDP (the highest in sub-Saharan Africa). From 2001 through 2013, AGOA-eligible countries received FDI that on average amounted to about 5.6 percent of GDP, while ineligible countries averaged about 2.7 percent. (See fig. 10.) Being eligible for AGOA may help a country attract aid and investment. For example, AGOA eligibility can be seen as a signal of a relatively stable political environment as well as advantages in tariff treatment for certain products. According to a recent report by the U.S. International Trade Commission, AGOA has signaled improvements in the business and investment climate in SSA countries, and has contributed to increasing FDI flows to these countries. Additionally, the International Monetary Fund reported in June 2014 that in Swaziland uncertain prospects for AGOA eligibility could affect investment and employment in the textile sector. Similarly, Ethiopian government officials in the Ministry of Trade said that AGOA has helped to attract foreign direct investment to Ethiopia. Our analysis of factors contributing to economic development in SSA countries and review of academic literature suggest that isolating AGOA’s impact on overall economic development is difficult. 
We found that on average, AGOA-eligible countries have had higher annual income per person and slightly higher growth rates in annual income per person than ineligible countries; we also found evidence suggesting that AGOA eligibility might be associated with other factors that also can positively affect development. For example, our review of academic literature indicated that increased FDI could enhance countries’ economic growth, and our analysis demonstrated that on average AGOA-eligible countries receive more FDI inflows relative to the size of their economies. We are not making any recommendations in this report. We provided a draft of this report for comment to the Departments of Agriculture, Commerce, Labor, State, and the Treasury; USAID; and the Office of the U.S. Trade Representative (USTR). The Departments of Labor, State, the Treasury, and USTR provided technical comments, which we have incorporated in the report, as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretaries of Agriculture, Commerce, Homeland Security, Labor, State, and the Treasury; the Administrator of USAID; and the U.S. Trade Representative. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objectives were to examine (1) how the African Growth and Opportunity Act (AGOA) eligibility review process has considered and the AGOA Forums have supported economic, political, and development reform objectives described in the act and (2) how sub-Saharan African (SSA) countries have fared in certain economic development outcomes since the enactment of AGOA. To examine how the U.S. 
government’s process for determining AGOA eligibility and the AGOA Forums have supported reform objectives established in sections 104 and 105 of the act, we reviewed the AGOA legislation and documents from the seven U.S. agencies relevant to the AGOA eligibility criteria. We analyzed the AGOA eligibility status of SSA countries and the implementation of AGOA Forum activities since AGOA’s original enactment to identify changes in eligibility from 2000 through January 2015. We also attended and observed 2014 AGOA Forum events. To address both objectives, we interviewed officials from the Departments of Agriculture, Commerce, Labor, State, and the Treasury; the U.S. Agency for International Development; and the Office of the U.S. Trade Representative (USTR), all of which are the members of the Trade Policy Staff Committee’s AGOA Implementation Subcommittee that generally prepare the sub-Saharan Africa country reports for the annual eligibility review. To examine the relationship between AGOA eligibility and economic development in sub-Saharan Africa, we analyzed data on gross domestic product (GDP) per capita and total population from the World Bank World Development Indicators. We used data from the April 2014 version of the World Development Indicators. We compared population-weighted average GDP per capita at the end of 2012 for AGOA-eligible countries versus ineligible countries, as well as for the top three AGOA-eligible petroleum exporting countries versus other AGOA-eligible countries. We also compared average annual growth rates in annual income per person from 2000 to 2012 for AGOA-eligible versus ineligible countries, as well as for the top three AGOA-eligible petroleum exporting countries versus other AGOA-eligible countries. To study sub-Saharan African countries’ exports under AGOA as a share of total exports as well as the value of petroleum exports to the United States under AGOA, we used U.S. 
Census trade data on imports by trading partners and imports by product from 2013. We used data on countries’ total exports from the International Monetary Fund’s Direction of Trade Statistics and International Financial Statistics databases. We calculated AGOA-eligible countries’ shares of AGOA and Generalized System of Preferences (GSP) exports in their overall exports to study how the value of exports under these trade preference systems compared with the value of overall exports for AGOA-eligible countries in 2013. To study differences in the quality of governance institutions between AGOA-eligible and ineligible countries, we analyzed data on governance from the World Bank Worldwide Governance Indicators, comparing average scores for Political Stability and Rule of Law in 2000 and 2012 between AGOA-eligible and ineligible countries. To describe the differences in the amount of foreign development assistance and foreign direct investment received by AGOA-eligible and ineligible countries, we used data on country programmable aid from the Organisation for Economic Co-operation and Development (OECD) and foreign direct investment (FDI) as a percentage of GDP from the World Development Indicators. We compared yearly averages of aid per capita (from 2000 to 2012) and net FDI inflows as a percentage of GDP (from 2001 to 2013) between AGOA-eligible and ineligible countries. To assess the reliability of these data, we reviewed publicly available documents on these databases and conducted electronic testing for missing values and outliers. We determined that the data were sufficiently reliable for our purposes. We also reviewed a judgmental sample of peer-reviewed academic literature related to economic development, foreign direct investment, foreign aid, and the impact of trade preference programs. Country classifications.
For most of the analysis, we defined AGOA-eligible countries as the 40 SSA countries that were deemed eligible for AGOA benefits as of the end of 2012. Nine SSA countries were ineligible for AGOA benefits as of the end of 2012. We chose 2012 as the base year for this classification because it was the latest year for which data on GDP per capita were available for the SSA countries in the April 2014 version of the World Bank World Development Indicators. The only exception is that for the analysis of the exports under AGOA as a share of total exports from AGOA-eligible countries, we defined AGOA-eligible countries as the 39 countries that were deemed eligible for AGOA benefits as of 2013 because we analyzed 2013 trade statistics. GDP per capita. To study differences and depict trends in income per person between selected groupings of countries, we used the World Development Indicators annual GDP per capita series, expressed in year 2005 U.S. dollars. Thirty-seven out of 40 countries eligible for AGOA benefits in 2012, and 8 out of 9 ineligible countries, reported complete GDP per capita data from 2000 through 2012. Djibouti, São Tomé and Principe, and South Sudan were excluded from the AGOA-eligible group because of missing data. Somalia was excluded from the ineligible group due to missing data. Within each country grouping, we took the weighted average of countries’ GDP per capita, where the weights are given by the share of a country’s population in the overall group’s population. The weighted average GDP per capita is a measure of the yearly income of the average individual in the country group. Equation (1) shows that the weighted average GDP per capita is equivalent to summing up the GDP of every country in the group and dividing by the total group population:

\bar{y} = \sum_{i=1}^{n} \frac{L_i}{\sum_{j=1}^{n} L_j} \cdot \frac{y_i}{L_i} = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} L_i} \qquad (1)

where n denotes the number of countries in the group, y_i refers to the gross domestic product of country i, and L_i refers to the population of country i. AGOA export share.
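The population-weighted average in equation (1) reduces to the group's total GDP divided by its total population. A minimal Python sketch of that calculation follows; the function name and the country figures are illustrative placeholders, not the World Development Indicators data used in the report.

```python
# Population-weighted average GDP per capita for a group of countries,
# as in equation (1): sum of group GDP divided by total group population.

def weighted_gdp_per_capita(countries):
    """countries: list of (gdp_per_capita, population) pairs."""
    total_gdp = sum(gdp_pc * pop for gdp_pc, pop in countries)
    total_pop = sum(pop for _, pop in countries)
    return total_gdp / total_pop

# Hypothetical group of three countries (GDP per capita in 2005 USD, population).
group = [(500.0, 10_000_000), (1_200.0, 40_000_000), (300.0, 5_000_000)]
print(round(weighted_gdp_per_capita(group), 2))  # → 990.91
```

Note that the result (about 991) is pulled toward the most populous country's GDP per capita (1,200), which is the point of population weighting: it reports the income of the average individual in the group rather than the average of country-level figures.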
To examine the magnitude of AGOA exports relative to the total exports of each AGOA-eligible country, we used U.S. Census data on imports by trading partners. We calculated the value of imports under AGOA (i.e., imports that received duty-free access claiming AGOA preference benefits) and imports that received duty-free access under GSP. Since AGOA was established as a program for SSA countries that builds on GSP, we analyzed exports from AGOA-eligible countries to the United States under both programs together. AGOA countries continue to have duty-free access to the commodities covered under the GSP, although that program expired in 2013. We computed the AGOA (including GSP) share of exports relative to total exports for each AGOA-eligible country in 2013, and graphically tabulated countries according to their AGOA export share. In this analysis, both the exports data and the definition of AGOA eligibility are from 2013. We used data from two International Monetary Fund databases, Direction of Trade and International Financial Statistics, to determine total exports for each country. Top three AGOA-eligible petroleum exporters. The top three AGOA-eligible petroleum exporters were Nigeria, Angola, and Chad, which collectively accounted for 90 percent of all petroleum exports to the U.S. under AGOA in 2013, based on U.S. Census data on AGOA imports by product. Since AGOA was established as a program for SSA countries that builds on GSP, the 90 percent statistic refers to exports of petroleum from AGOA-eligible countries to the United States under both programs together. AGOA countries continue to have duty-free access to the commodities covered under the GSP, although that program expired in 2013. AGOA-eligible countries minus the top petroleum exporters refer to the remaining 34 AGOA-eligible countries. Governance.
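The AGOA export-share computation described above is a simple ratio: U.S. imports entering under AGOA and GSP preferences from a country, divided by that country's total exports. A sketch under that reading, with hypothetical dollar values rather than actual Census or IMF figures:

```python
# AGOA (including GSP) export share: U.S. imports from a country that
# entered under AGOA or GSP preferences, divided by the country's total
# exports to all partners. Both programs are analyzed together, as in
# the report's methodology.

def agoa_export_share(agoa_imports, gsp_imports, total_exports):
    """Share of a country's total exports shipped to the U.S. under AGOA or GSP."""
    return (agoa_imports + gsp_imports) / total_exports

# Hypothetical country: $120M of AGOA imports, $30M under GSP,
# and $2B in total exports in 2013.
share = agoa_export_share(120e6, 30e6, 2e9)
print(f"{share:.1%}")  # → 7.5%
```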
To examine differences in the quality of governance (also known as “institutions”) between AGOA-eligible and ineligible countries, we reviewed a judgmental sample of empirical academic literature that provided evidence that property rights and political stability can promote economic growth. We judgmentally identified two measures of institutional quality from the Worldwide Governance Indicators that may capture aspects of the security of private property, namely scores for the rule of law and political stability. We compared the simple average of scores in 2000 and 2012 for AGOA-eligible countries versus ineligible countries. We rescaled the indicators to range from 0 to 5, with higher scores indicating better perceptions of governance. Aid and foreign direct investment. To examine differences in the amount of development assistance received by AGOA-eligible versus ineligible countries, we used annual data from the OECD on country programmable aid. According to the OECD, country programmable aid (CPA) is the proportion of aid that is subjected to multiyear programming at the country level, and hence represents a subset of official development assistance (ODA) flows. CPA is equivalent to gross ODA disbursements by recipient but excludes spending that is (1) inherently unpredictable (humanitarian aid and debt relief); or (2) entails no flows to the recipient country (administration costs, student costs, development awareness and research, and refugee spending in donor countries); or (3) is usually not discussed between the main donor agency and recipient governments (food aid, aid from local governments, core funding to nongovernmental organizations, aid through secondary agencies, ODA equity investments, and aid that is not allocable by country). CPA counts loan repayments among the aid transferred from donor countries to developing countries. 
We represented country programmable aid in per person units by dividing the program aid total by the total population of the country. Data on population were from the World Development Indicators. We computed the simple average of aid per person in each year from 2005 to 2012 for AGOA-eligible and AGOA-ineligible countries. To examine differences in the amount of foreign direct investment received by AGOA-eligible versus ineligible countries, we used annual data on net inflows of foreign direct investment as a percentage of GDP from the World Bank World Development Indicators. We computed the simple average of these series in each year from 2001 to 2013 for AGOA-eligible and AGOA-ineligible countries. In using the FDI data, we checked for outliers and missing values and identified Equatorial Guinea as an outlier based on comparisons with data from other sources; values for Equatorial Guinea’s net FDI inflows as a percentage of GDP were omitted from the calculation of the average. We conducted this performance audit from April 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Central African Republic (eligible Oct. 2, 2000; ineligible Jan. 1, 2004)
Congo, Democratic Republic of (eligible Oct. 31, 2003; ineligible Jan. 1, 2011)
Congo, Republic of (eligible Oct. 2, 2000)
Côte d’Ivoire (eligible May 16, 2002; ineligible Jan. 1, 2005; eligibility regained Oct. 25, 2011)
Gambia, The (eligible Mar. 28, 2003; ineligible Jan. 1, 2015)
Guinea (eligible Oct. 2, 2000; ineligible Jan. 1, 2010; eligibility regained Oct. 25, 2011)
Guinea-Bissau (eligible Oct. 2, 2000; ineligible Jan. 1, 2013; eligibility regained Dec. 23, 2014)
Madagascar (eligible Oct. 2, 2000; ineligible Jan. 1, 2010; eligibility regained June 26, 2014)
Mali (eligible Oct. 2, 2000; ineligible Jan. 1, 2013; eligibility regained Dec. 23, 2013)

In addition to the person named above, Christine Broderick (Assistant Director), Ming Chen (Assistant Director), Rhonda M. Horried (Analyst-in-Charge), Michael Hoffman, John O’Trakoun, Qahira El’Amin, Giselle Cubillos-Moraga, Thomas Hitz, David Dayton, Oziel A. Trevino, Jill Lacey, and Ernie Jackson made significant contributions to this report.

African Growth and Opportunity Act: USAID Could Enhance Utilization by Working with More Countries to Develop Export Strategies. GAO-15-218. Washington, D.C.: January 22, 2015.
Foreign Assistance: USAID Should Update Its Trade Capacity Building Strategy. GAO-14-602. Washington, D.C.: September 10, 2014.
African Growth and Opportunity Act: Observations on Competitiveness and Diversification of U.S. Imports from Beneficiary Countries. GAO-14-722R. Washington, D.C.: July 21, 2014.
Sub-Saharan Africa: Trends in U.S. and Chinese Economic Engagement. GAO-13-199. Washington, D.C.: February 7, 2013.
Foreign Assistance: The United States Provides Wide-ranging Trade Capacity Building Assistance, but Better Reporting and Evaluation Are Needed. GAO-11-727. Washington, D.C.: July 29, 2011.
U.S.-Africa Trade: Options for Congressional Consideration to Improve Textile and Apparel Sector Competitiveness under the African Growth and Opportunity Act. GAO-09-916. Washington, D.C.: August 12, 2009.
International Trade: U.S. Trade Preference Programs: An Overview of Use by Beneficiaries and U.S. Administrative Reviews. GAO-07-1209. Washington, D.C.: September 27, 2007.
Foreign Assistance: U.S. Trade Capacity Building Extensive, but Its Effectiveness Has Yet to Be Evaluated. GAO-05-150. Washington, D.C.: February 11, 2005.
| Enacted in 2000 and set to expire in September 2015, AGOA is a trade preference program that seeks to promote economic development in 49 sub-Saharan African countries by allowing eligible countries to export qualifying goods to the United States without import duties. The act requires the U.S. government to conduct an annual eligibility review to assess each country's progress on economic, political, and development reform objectives in order to be eligible for AGOA benefits. AGOA also requires an annual forum to foster closer economic ties between the United States and sub-Saharan African countries. GAO was asked to review various issues related to AGOA's economic development benefits. In this report, GAO examines (1) how the AGOA eligibility review process has considered economic, political, and development reform objectives described in the act and (2) how sub-Saharan African countries have fared in certain economic development outcomes since the enactment of AGOA. GAO reviewed documents and interviewed officials from U.S. agencies to examine the relationship between the U.S. government's review process and AGOA reform criteria. GAO analyzed trends in economic development indicators for AGOA eligible and ineligible countries from 2001 to 2012, the latest year for which data were available for most countries. The U.S. government uses the annual eligibility review process required by the African Growth and Opportunity Act (AGOA) to engage with sub-Saharan African countries on their progress toward economic, political, and development reform objectives reflected in AGOA's eligibility criteria. Managed by the Office of the United States Trade Representative, the review process brings together officials from U.S. agencies each year to discuss the progress each country is making with regard to AGOA's eligibility criteria and to reach consensus as to which countries should be deemed eligible to receive AGOA benefits. 
Over the lifetime of AGOA, 13 countries have lost AGOA eligibility, although 7 eventually had it restored (see figure). To encourage reforms, the U.S. government will engage with countries experiencing difficulty meeting eligibility criteria and may specify measures a country can take. For example, U.S. officials met with Swaziland officials over several years to discuss steps to improve labor rights. However, Swaziland did not make the necessary reforms and lost eligibility effective in January 2015. GAO analyzed data on economic development indicators for sub-Saharan African countries that were eligible and ineligible for AGOA in 2012; the results showed that eligible countries fared better than ineligible countries on some economic measures since the enactment of AGOA. The extent to which this outcome is attributable to AGOA, however, is difficult to isolate after additional factors are taken into consideration. Other factors, such as the small share of AGOA exports in the overall exports of many AGOA-eligible countries, the role of petroleum exports in recent income growth, the quality of government institutions, and differences in levels of foreign aid and investment, make it difficult to isolate AGOA's contribution to overall economic development. For example, AGOA exports are a small share of overall exports for the majority of AGOA-eligible countries. GAO found evidence that increasing energy prices may also have contributed to income growth within AGOA-eligible petroleum-exporting countries. GAO also found that AGOA-eligible countries on average had higher governance scores and received more foreign aid and investment compared with ineligible countries. These differences may have contributed to economic development in AGOA-eligible countries, but they may also have been facilitated by AGOA, a possibility that makes it difficult to isolate AGOA's impact on economic development. GAO is not making any recommendations. |
Since 1989, when the first drug court program was established, the number of drug court programs has increased substantially. In addition, DCPO’s oversight responsibilities and funding to support the planning, implementation, and enhancement of these programs have increased. As shown in figure 1, the number of operating drug court programs has more than tripled since our prior report, from about 250 in 1997 to almost 800 in 2001, based on information available as of December 31, 2001. The number of operating programs that received DCPO funding, and thus were subject to its oversight, has also grown, from over 150 in fiscal year 1997 to over 560 through fiscal year 2001. As shown in figure 2, the number of drug court programs started by calendar year since our prior report has also increased. Although the number of drug court programs started in 2001 dropped, over 450 additional programs have been identified as being planned, based on information available as of December 31, 2001. Based on information available as of December 31, 2001, drug court programs were operating in 48 states, the District of Columbia, and Puerto Rico. Only New Hampshire and Vermont had no operating drug court programs. Six states (California, Florida, Louisiana, Missouri, New York, and Ohio) accounted for over 40 percent of the programs. Appendix II provides information on the number of operating drug court programs in each state. Although there are basic elements common to many drug court programs, the programs vary in terms of approaches used, participant eligibility and program requirements, type of treatment provided, sanctions and rewards, and other practices. Drug court programs also target various populations (adults, juveniles, families, and Native American tribes).
Appendix III provides details on the number of drug court programs by targeted population, and appendix IV provides details on the drug court programs by jurisdiction and the types of funding, if any, the programs have received from DCPO. Federal funding for drug court programs has also continued to increase. As shown in table 1, congressional appropriations for the implementation of DOJ’s drug court program have increased from about $12 million in fiscal year 1995 to $50 million in fiscal years 2001 and 2002. Since fiscal year 1995, Congress has appropriated about $267 million in Violent Crime Act-related funding to DOJ for the federal drug court program. DCPO funding in direct support of drug court programs has increased from an average of about $9 million in fiscal years 1995 and 1996 to an average of about $31 million for fiscal years 1997 through 2001. Between fiscal years 1995 and 2001, DCPO has awarded about $174.5 million in grants to fund the planning, implementation, and enhancement of drug court programs. About $21.5 million in technical assistance, training, and evaluation grants were awarded. About $19.6 million were obligated for management and administration purposes and to fund nongrant technical assistance, training, and evaluation efforts. Since the inception of the DCPO drug court program, a total of $3 million in prior-year recoveries has been realized. About $4.5 million through fiscal year 2001 had not been obligated. Congress appropriated an additional $50 million for fiscal year 2002. At the time of our review, DCPO was in the process of administering the fiscal year 2002 grant award program. Appendix V provides details on the number, amount, and types of grants DCPO awarded since the implementation of the federal drug court program. Since 1998, DCPO implementation and enhancement grantees have been required to collect, and starting in 1999, to submit to DCPO, among other things, performance and outcome data on program participants.
DCPO collects these data semiannually using a Drug Court Grantee Data Collection Survey. This survey was designed by DCPO to ensure that grantees were collecting critical information about their drug court programs and to assist in the national evaluation of drug court programs. In addition, DOJ intended to use the information to respond to inquiries regarding the effectiveness of drug court programs. However, due to various factors, DCPO has not sufficiently managed the collection and utilization of these data. As a result, DOJ cannot provide Congress, drug court program stakeholders, and others with reliable information on the performance and impact of federally funded drug court programs. Various factors contributed to insufficiencies in DOJ’s drug court program data collection effort. These factors included (1) inability of DOJ to readily identify the universe of DCPO-funded drug court programs, including those subject to DCPO’s data collection reporting requirements; (2) inability of DOJ to accurately determine the number of drug court programs that responded to DCPO’s semiannual data collection survey; (3) inefficiencies in the administration of DCPO’s semiannual data collection effort; (4) the elimination of post-program impact questions from the scope of DCPO’s data collection survey effort; and (5) the insufficient use of the Drug Court Clearinghouse. DOJ’s grant management information system, among other things, tracks the number and dollar amount of grants the agency has awarded to state and local jurisdictions and Native American tribes to plan, implement, and enhance drug court programs. This system, however, is unable to readily identify the actual number of drug court programs DCPO has funded. 
Specifically, the system does not contain a unique drug court program identifier, does not track grants awarded to a single grantee but used for more than one drug court program, and contains data entry errors that impact the reliability of data on the type of grants awarded. For example, at the time of our review, the system contained some incorrectly assigned grant numbers, did not always identify the type of grant awarded, and incorrectly identified several grantees as receiving a planning, implementation, and enhancement grant in fiscal year 2000. These factors made it difficult for DCPO to readily produce an accurate universe of the drug court programs that had received DCPO funding and were subject to DCPO’s data collection reporting requirement. Although DOJ has been able to provide information to enable an estimate of the universe of DCPO-funded drug court programs to be derived, the accuracy of this information is questionable because DCPO has relied on the Drug Court Clearinghouse to determine the number of DCPO-funded drug court programs and their program implementation dates. One of the Drug Court Clearinghouse’s functions has been to identify DCPO-funded drug court programs. However, the Drug Court Clearinghouse has only been tasked since 1998 with following up with a segment of DCPO grantees to determine their implementation date. Thus, the information provided to DCPO on the universe of DCPO-funded drug court programs is at best an estimate and not a precise count of DCPO drug court program grantees. Noting that its current grant information system was not intended to readily identify and track the number of DCPO-funded drug court programs, DCPO officials said that they plan to develop a new management information system that will enable DOJ to do so. 
Without an accurate universe of DCPO-funded drug court programs, DCPO is unable to readily determine the actual number of programs or participants it has funded or, as discussed below, the drug court programs that should have responded to its semiannual data collection survey. According to DCPO officials, grantee response rates to DCPO’s semiannual survey have declined since DCPO began administering the survey in 1998. As shown in figure 3, the information in DCPO’s database indicated that grantee response rates declined from about 78 percent for the first survey reporting period (July to Dec. 1998) to about 32 percent for the July to December 2000 reporting period. However, results from our follow-up structured interviews with a representative sample of the identifiable universe of drug court programs that were DCPO grantees during the 2000 reporting periods revealed that DCPO did not have an accurate account of grantees’ compliance with its semiannual data collection survey. Based on our structured interviews, we estimate that the response rate to the DCPO data collection survey for the January to June 2000 reporting period was about 60 percent in contrast to the 39 percent response rate DCPO reported. Similarly, the response rate to the DCPO survey for the July to December 2000 reporting period was about 61 percent in contrast to the 32 percent response rate DCPO reported. The remaining programs did not respond or were uncertain as to whether they responded to DCPO’s data collection survey for each of the reporting periods in 2000. DOJ officials said that some of the surveys they did not receive may have been mailed to an incorrect office within DOJ. DCPO officials acknowledged that this type of error could be mitigated if DCPO routinely followed up with the drug court programs from which they did not receive responses. 
Furthermore, based on our follow-up structured interviews with a representative sample of DCPO-funded drug court programs that were listed as nonrespondents in DCPO’s database, we estimate that about 61 percent had actually responded to DCPO’s survey for the January to June 2000 reporting period. About two-thirds of these programs could produce evidence that they responded. For the July to December 2000 reporting period, we estimate that about 51 percent of the DCPO-funded drug court programs that were listed as nonrespondents in DCPO’s database had actually responded to the survey. About two-thirds of these programs could produce evidence that they responded. The requirement for grantees to submit DCPO’s semiannual survey is outlined in DOJ’s grant award notification letter that drug court program grantees receive at the beginning of their grant period. In addition, the survey is made available in the grantee application kit as well as on DCPO’s website. However, other than these steps, DCPO has not consistently notified its drug court program grantees of the semiannual reporting requirements nor has it routinely forwarded the survey to grantees. At the time of our review, DCPO had taken limited action to improve grantees’ compliance with the data collection survey requirements. DCPO officials said that they generally had not followed up with drug court program grantees that did not respond to the survey and had not taken action towards the grantees that did not respond to the semiannual data collection reporting requirement. Results from our follow-up structured interviews showed that DCPO had not followed up to request completed surveys from about 70 percent of the drug court program grantees that were nonrespondents during the January to June 2000 reporting period and from about 76 percent of the nonrespondents for the July to December 2000 reporting period. DCPO has had other difficulties managing its data collection effort. 
Specifically, (1) DCPO inadvertently instructed drug court program grantees not to respond to questions about program participants’ criminal recidivism while in the program; (2) confusion existed between DCPO and its contractor, assigned responsibility for the semiannual data collection effort, over who would administer DCPO’s data collection survey during various reporting periods; and (3) some grantees were using different versions of DOJ’s survey instruments to respond to the semiannual data collection reporting requirement. The overall success of a drug court program depends on whether defendants in the program stay off drugs and do not commit more crimes when they complete the program. In our 1997 report, we recommended that drug court programs funded by discretionary grants administered by DOJ collect and maintain follow-up data on program participants’ criminal recidivism and, to the extent feasible, follow-up data on drug use relapse. In 1998, DCPO required its implementation and enhancement grantees to collect and provide performance and outcome data on program participants, including data on participants’ criminal recidivism and substance abuse relapse after they have left the program. However, in 2000, DCPO revised its survey and eliminated the questions that were intended to collect post-program outcome data. The DCPO Director said that DCPO’s decision was based on, among other things, drug court program grantees indicating that they were not able to provide post-program outcome data and that they lacked sufficient resources to collect such data. DCPO, however, was unable to produce specific evidence from grantees (i.e., written correspondence) that cited difficulties with providing post-program outcome data. The Director said that difficulties have generally been conveyed by grantees, in person, through telephone conversations, or are evidenced by the lack of responses to the post-program questions on the survey.
Contrary to DCPO’s position, evidence exists that supports the feasibility of collecting post-program performance and outcome data. During our 1997 survey of the drug court programs, 53 percent of the respondents said that they maintained follow-up data on participants’ rearrest or conviction for a nondrug crime. Thirty-three percent said that they maintained follow-up data on participants’ substance abuse relapse. Recent information collected from DCPO grantees continues to support the feasibility of collecting post-program performance and outcome data. The results of structured interviews we conducted in the year 2001 with a representative sample of DCPO-funded drug court programs showed that an estimated two-thirds of the DCPO-funded drug court programs maintained criminal recidivism data on participants after they left the program. About 84 percent of these programs maintained such data for 6 months or more. Of the remaining one-third that did not maintain post-program recidivism data, it would be feasible for about 63 percent to provide such data. These estimates suggest that about 86 percent of DCPO-funded drug court programs would be able to provide post-program recidivism data if requested. The results of structured interviews we conducted in the year 2001 with a representative sample of DCPO-funded drug court programs also showed that about one-third of the DCPO-funded drug court programs maintained substance abuse relapse data on participants after they have left the program. About 84 percent of these programs maintained such data for 6 months or more. Of the estimated two-thirds that did not maintain post-program substance abuse relapse data, it would be feasible for about 30 percent to provide such data. These estimates suggest that about 50 percent of DCPO-funded drug court programs would be able to provide post-program substance abuse data if requested.
According to survey results collected by the Drug Court Clearinghouse in 2000 and 2001, a significant number of the drug court programs were able to provide post-program outcome data. For example, about 47 percent of the DCPO-funded adult drug court programs that responded to the Drug Court Clearinghouse’s 2000 operational survey reported that they maintained some type of follow-up data on program participants after they left the program. Of these drug court programs, about 92 percent said that they maintained follow-up data on recidivism and about 45 percent said that they maintained follow-up data on drug usage. Of the DCPO-funded adult and juvenile drug court programs operating for at least a year that responded to the Drug Court Clearinghouse’s annual survey that was published in 2001, about 56 percent were able to provide follow-up data on program graduates’ recidivism and about 55 percent were able to provide follow-up data on program graduates’ drug use relapse. Operating under a cooperative agreement with DCPO, the Drug Court Clearinghouse has successfully collected performance and outcome data through an annual survey of all operating adult, juvenile, family, and tribal drug court programs, including those funded by DCPO. In addition, as previously noted, the Drug Court Clearinghouse has generally administered an operational survey to adult drug court programs every 3 years, including those funded by DCPO. The Drug Court Clearinghouse annually disseminates the results from its annual survey and has periodically published comprehensive drug court survey reports that provide detailed operational, demographic, and outcome data on the adult drug court programs identified through its data collection efforts. Although funded by DOJ, the Drug Court Clearinghouse has not been required to focus its data collection and reporting primarily on the universe of DCPO-funded programs.
In addition, no comprehensive or representative report has been produced by DCPO or the Drug Court Clearinghouse that focuses primarily on the performance and outcome of DCPO-funded drug court programs. Instead, DCPO instructed the Drug Court Clearinghouse, in July 2001, to eliminate recidivism data from its survey publications. Although the Drug Court Clearinghouse has developed and implemented survey instruments to periodically collect and disseminate recidivism and relapse data, the DCPO Director had concerns with the quality of the self-reported data collected and the inconsistent time frames for which post-program data were being collected by drug court programs. In response to recommendations in our 1997 report, DOJ undertook, through NIJ, an effort to conduct a two-phase national impact evaluation focusing on 14 selected DCPO-funded drug court programs. This effort was intended to include post-program data within its scope and to involve the use of nonparticipant comparison groups. However, various administrative and research factors hampered DOJ’s ability to complete the NIJ-sponsored national impact evaluation, which was originally to be completed by June 30, 2001. As a result, DOJ fell short of its objective, discontinued this effort, and is considering an alternative study that, if implemented, is not expected to provide information on the impact of federally funded drug court programs until year 2007. Unless DOJ takes interim steps to evaluate the impact of drug court programs, the Congress, the public, and other drug court stakeholders will not have sufficient information in the near term to assess the overall impact of federally funded drug court programs. The overall objective of the NIJ-sponsored national evaluation was to study the impact of DCPO-funded drug court programs using comparison groups and studying, among other things, criminal recidivism and drug use relapse. 
This effort was to be undertaken in two phases and to include the collection of post-program outcome data. The objectives for phase I, for which NIJ awarded a grant to RAND in August 1998, were to (1) develop a conceptual framework for evaluating the 14 DCPO-funded drug court programs, (2) provide a description of the implementation of each program, (3) determine the feasibility of including each of these 14 drug court programs in a national impact evaluation, and (4) develop a viable design strategy for evaluating program impact and the success of the 14 drug court programs. The design strategy was to be presented in the form of a written proposal for a supplemental noncompetitive phase II grant. The actual impact evaluation and an assessment of the success of the drug court programs were to be completed during phase II of the study using a design strategy resulting from phase I. NIJ’s two-phase national impact evaluation was originally planned for completion by June 30, 2001. Phase I was awarded for up to 24 months and was scheduled to conclude no later than June 30, 2000. However, phase I was not completed until September 2001—15 months after the original project due date. Phase II, which NIJ expected to award after the satisfactory submission of a viable design strategy for completing an impact evaluation, has since been discontinued. Various administrative and research factors contributed to delays in the completion of phase I and DOJ’s subsequent decision to discontinue the evaluation. The factors included (1) DCPO’s delay in notifying its grantees of RAND’s plans to conduct site visits; (2) RAND’s lateness in meeting task milestones; (3) NIJ’s multiple grant extensions to RAND that extended the timeframe for completing phase I and further delayed NIJ’s subsequent decision to discontinue phase II; and (4) the inability of the phase I efforts to produce a viable design strategy that was to be used to complete a national impact evaluation in phase II.
Phase I of the NIJ-sponsored study was initially hampered by DCPO’s delay in notifying its grantees of plans to conduct the national impact evaluation. In November 1998, DCPO agreed to write a letter notifying its grantees of RAND’s plan to conduct the national evaluation. The notification letters were sent in March 1999. As a result, drug court program site visits, which RAND had originally planned to complete by February 1999, were not completed until July 1999. Although RAND completed most of the tasks associated with the national evaluation phase I objectives, it was generally late in meeting task milestones. The conceptual framework for the evaluation of 14 DCPO-funded drug court programs, which RAND was originally scheduled to complete by September 1999, was submitted to NIJ in May 2000—8 months after the original task milestone. This timeframe, according to RAND, was impacted by the delay in DOJ’s initiation of site visits. NIJ officials said that RAND also did not deliver a complete description and analysis of drug court implementation issues to NIJ, which was also due in September 1999, until it received the first draft of RAND’s report in March 2001. The feasibility study, which was originally scheduled to be completed by RAND in September 1999, was provided to NIJ in November 1999. This study informed NIJ of RAND’s concerns with the evaluability of some of the 14 selected DCPO sites. The viable design strategy proposal for evaluating program impact at each of the 14 drug court programs, which RAND was originally expected to complete by May 1999, was not completed. In addition, as discussed below and detailed in appendix VI, RAND was consistently late in meeting the extended milestones for delivery of the final product for phase I.
Although RAND raised concerns in November 1999 regarding the feasibility of completing a national impact evaluation at some of the 14 selected DCPO sites, NIJ continued to grant multiple no-cost extensions that further extended the completion of phase I. The first no-cost grant extension called for phase I of the project to end by September 30, 2000; the second no-cost extension called for phase I to end by December 31, 2000; and the final extension authorized completion of phase I by May 31, 2001. Despite the multiple extensions and RAND’s repeated assurances that the phase I report was imminent, a final phase I report was not completed until September 18, 2001—21 months after the original milestone for completion of phase I. NIJ officials said that, in retrospect, they should have discontinued this effort sooner. Appendix VI provides additional details on the phase I delays in the NIJ-sponsored effort to complete a national impact evaluation. Phase I of the NIJ-sponsored national impact evaluation did not produce a viable design strategy that would enable an impact evaluation to be completed during phase II using the selected DCPO-funded drug court programs. RAND did offer an alternative approach. However, this approach did not address the original objective—to conduct a national impact evaluation. During its feasibility study, RAND rated the evaluability of the 14 program sites as follows: 4 as poor or neutral/poor, 5 as neutral, and 5 as neutral/good or good. In response, NIJ and DCPO asked RAND to consider completing the evaluation using those DCPO-funded program sites that were deemed somewhat feasible. RAND, however, was not receptive to this suggestion and did not produce a viable design strategy based on the 14 DCPO-funded programs or the subset of DCPO-funded programs that were deemed feasible to use in phase II to evaluate the impact of federally funded drug court programs.
As a result, DOJ continues to lack a design strategy for conducting a national impact evaluation that would enable it to address the impact of federally funded drug court programs in the near term. To address the need for the completion of a national impact evaluation, DCPO and NIJ are considering plans to complete a longitudinal study of drug-involved offenders in up to 10 drug court program jurisdictions. The DCPO Director said that the study would be done at a national level, and the scope would include comparison groups and the collection of individual level and post-program recidivism data. DOJ expects that this project, which is in its formative stage, if implemented, will take up to 4 years to complete—with results likely in year 2007. We recognize that it would take time to design and implement a rigorous longitudinal evaluation study and that if properly implemented, such an effort should better enable DOJ to provide information on the overall impact of federally funded drug court programs. However, its year 2007 completion timeframe will not enable DOJ to provide the Congress and other stakeholders with near-term information on the overall impact of federally funded drug court programs that has been lacking for nearly a decade. Despite a significant increase in the number of drug court programs funded by DCPO since 1997 that are required to collect and maintain performance and outcome data, DOJ continues to lack vital information on the overall impact of federally funded drug court programs. Furthermore, the agency’s alternative plan for addressing the impact of federally funded drug court programs will not offer near-term answers on the overall impact of these programs. Improvements in DCPO’s management of the collection and utilization of performance and outcome data from federally funded drug court programs are needed.
Additionally, more immediate steps from NIJ and DCPO to carry out a methodologically sound national impact evaluation could better enable DOJ to provide Congress and other drug court program stakeholders with more timely information on the overall impact of federally funded drug court programs. Until DOJ takes such actions, the Congress, public, and other stakeholders will continue to lack sufficient information to (1) measure long-term program benefits, if any; (2) assess the impact of federally funded drug court programs on the criminal behavior of substance abuse offenders; or (3) assess whether drug court programs are an effective use of federal funds. To improve the Department of Justice’s collection of data on the performance and impact of federally funded drug court programs, we recommend that the Attorney General:
- develop and implement a management information system that is able to track and readily identify the universe of drug court programs funded by DCPO;
- take steps to ensure and sustain an adequate grantee response rate to DCPO’s data collection efforts by improving efforts to notify and remind grantees of their reporting requirements;
- take corrective action towards grantees who do not comply with DOJ’s data collection reporting requirements;
- reinstate the collection of post-program data in DCPO’s data collection effort, selectively spot checking grantee responses to ensure accurate reporting;
- analyze performance and outcome data collected from grantees and report annually on the results; and
- consolidate the multiple DOJ-funded drug court program-related data collection efforts to better ensure that the primary focus is on the collection and reporting of data on DCPO-funded drug court programs.
To better ensure that needed information on the impact of federally funded drug court programs is made available to the Congress, public, and other drug court stakeholders as early as possible, we also recommend that the Attorney General take immediate steps to accelerate the funding and implementation of a methodologically sound national impact evaluation and to consider ways to reduce the time needed to provide information on the overall impact of federally funded drug court programs. Furthermore, we recommend that steps be taken to implement appropriate oversight of this evaluation effort to ensure that it is well designed and executed, and remains on schedule. We requested comments on a draft of this report from the Attorney General. We also requested comments from RAND on a section of the draft report pertaining to its efforts to complete phase I of NIJ’s national evaluation effort. On April 3, 2002, DOJ provided written comments on the draft report (see app. VII). The Assistant Attorney General for the Office of Justice Programs noted that we made several valuable recommendations for improving the collection of data on the performance and impact of federally funded drug court programs and outlined steps DOJ is considering to address two of the six recommendations we make for improving its collection of data on the performance and impact of federally funded drug court programs. 
However, concerning the remaining four recommendations for improving DOJ’s data collection effort, DOJ does not specifically outline any plans (1) for taking corrective action towards grantees who do not comply with DCPO’s data collection reporting requirements; (2) to reinstate the collection of post-program data in DCPO’s data collection effort, despite the evidence cited in our report supporting the feasibility of collecting post-program data; (3) to analyze and report results on the performance and outcome of DCPO grantees; and (4) to consolidate the multiple DOJ-funded drug court program-related data collection efforts to ensure that the primary focus of any future efforts is on the collection and reporting of data on DCPO-funded programs. Although DOJ points out in its comments that a number of individual program evaluation studies have been completed, no national impact evaluation of these programs has been done to date. We continue to believe that until post-program follow-up data on program participants are collected across a broad range of programs and also included within the scope of future program and impact evaluations (including nonprogram participant data), it will not be possible to reach firm conclusions about whether drug court programs are an effective use of federal funds or whether different types of drug court program structures funded by DCPO work better than others. Also, unless these results are compared with those on the impact of other criminal justice programs, it will not be clear whether drug court programs are more or less effective than other criminal justice programs. As such, these limitations have prevented firm conclusions from being drawn on the overall impact of federally funded drug court programs.
With respect to our recommendations for improving DOJ’s drug court program-related impact evaluation efforts, DOJ, in its comments, outlines steps it is taking to complete a multisite impact evaluation and its plans to monitor the progress of this effort and to provide interim information during various intervals. As discussed on page 18 of this report, this effort is intended to be done at a national level, and the scope is to include comparison groups and the collection of individual-level and post-program recidivism data. On April 1, 2002, RAND provided written comments on the segment of the draft report relating to DOJ’s efforts to complete a national impact evaluation (see app. VIII). In its comments, RAND, as we do in our report, acknowledges the need for improvements in the data collection infrastructure for DCPO-funded drug court programs. RAND notes its rationale for why it views the deliverables associated with phase I of the NIJ-sponsored national impact evaluation as being timely and notes that researchers generally have discretion to revise timelines and scopes of work, with the agreement of the client. However, as we point out in our report (pp. 17-18 and app. VI), RAND requested several no-cost extensions to complete the deliverables for various task milestones and did not produce a viable design strategy for addressing the impact of DCPO-funded drug court programs. In addition, NIJ officials said that RAND also did not deliver a complete description and analysis of drug court implementation issues to NIJ until it received the first draft of RAND’s report in March 2001. The deliverable RAND refers to in its comment letter was a paper that RAND had prepared for the National Institute on Drug Abuse, which NIJ never considered to be a product under the grant to evaluate the impact of DCPO-funded drug court programs. As we also pointed out in our report (p. 17 and app. 
VI), NIJ was not amenable to RAND changing the scope or methodology of the national impact evaluation effort. In addition, RAND commented that a “simple” evaluation design was expected. NIJ’s original objective, however, never called for a simple evaluation design, but rather a viable design strategy involving the use of comparison groups and the collection of post-program data. We conducted our work at DOJ headquarters in Washington, D.C., between March 2001 and February 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will provide copies of this report to the Attorney General, the Director of the Office of Management and Budget, and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact Daniel C. Harris or me at (202) 512-2758 or at [email protected]. Key contributors to this report are acknowledged in appendix IX. Our overall objective for this review was to assess how well the Department of Justice (DOJ) has implemented efforts to collect performance and impact data on federally funded drug court programs. We specifically focused on DOJ’s (1) Drug Courts Program Office’s (DCPO) efforts to collect performance and outcome data from federally funded drug court programs and (2) National Institute of Justice’s (NIJ) efforts to complete a national impact evaluation of federally funded drug court programs. While there are drug court programs that receive funds from other federal sources, our review focused on those programs receiving federal funds from DCPO, which is DOJ’s component responsible for administering the federal drug court program under the Violent Crime Act. 
The scope of our work was limited to (1) identifying the processes DCPO used to implement its semiannual data collection effort; (2) determining DCPO grantees' compliance with semiannual data collection and reporting requirements; (3) determining what action, if any, DCPO has taken to monitor and ensure grantee compliance with the data collection reporting requirements; (4) identifying factors and barriers that may have contributed to a grantee's nonresponse and to delays in and the subsequent discontinuation of the NIJ-sponsored national evaluation of DCPO-funded programs; and (5) identifying improvements that may be warranted in DOJ's data collection efforts. To assess how well DCPO has implemented efforts to collect performance and outcome data from federally funded drug court programs, we (1) interviewed appropriate DOJ officials and other drug court program stakeholders and practitioners; (2) reviewed DCPO program guidelines to determine the drug court program grantee data collection and reporting requirements; (3) analyzed recent survey data collected by DCPO and the Drug Court Clearinghouse and Technical Assistance Project (Drug Court Clearinghouse) to obtain information on the number of drug court programs that have been able to provide outcome data; and (4) conducted structured interviews with a statistically valid probability sample of DCPO-funded drug court programs to determine (a) the programs' ability to comply with DCPO's data collection requirements, (b) whether the programs had complied with the data collection requirements, and (c) for those programs that did not comply with the data collection requirements, why they did not comply and what action, if any, DCPO had taken. For our structured interviews, we selected a stratified, random sample of 112 DCPO-funded drug court programs from a total of 315 drug court programs identified by DOJ as DCPO grantees in 2000. 
We stratified our sample into two groups based on whether the programs were listed in DCPO's database as respondents or nonrespondents to the required DCPO semiannual data collection survey in year 2000. To validate the accuracy of the list provided by DCPO, we compared the listing of 315 drug court programs identified as required to comply during a year 2000 reporting period with information on drug court program-related grant awards made by DCPO that was provided by OJP’s Office of the Comptroller to determine if the program was a DCPO grantee during the year 2000 reporting period. We defined a respondent as any drug court program grantee that was identified in DCPO's database as having responded to the DCPO survey during each applicable year 2000 reporting period. We defined a nonrespondent as a drug court program grantee that was identified in DCPO's database as not having responded to the DCPO survey during any applicable year 2000 reporting period. We used a structured data collection instrument to interview grantees. We interviewed 73 nonrespondents and 39 respondents. All results were weighted to represent the total population of drug court programs operating under a DCPO grant in year 2000. All statistical samples are subject to sampling errors. Measures of sampling error are defined by two elements, the width of the confidence intervals around the estimate (sometimes called the precision of the estimate) and the confidence level at which the intervals are computed. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. As each sample could have provided different estimates, we express our confidence level in the precision of our sample results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. 
As a result, we are 95 percent confident that each of the confidence intervals based on the structured interviews will include the true value in the study population. All percentage estimates from the structured interviews have sampling errors of plus or minus 10 percentage points or less unless otherwise noted. For example, this means that if a percentage estimate is 60 percent and the 95 percent confidence interval is plus or minus 10 percentage points, we have 95 percent confidence that the true value in the population falls between 50 percent and 70 percent. We performed limited verification of the drug court programs in our sample that were identified as nonrespondents in DCPO’s database to determine whether they were actually DCPO grantees in 2000. Data obtained from the drug court programs were self-reported and, except for evidence obtained to confirm grantee compliance with DCPO's year 2000 reporting requirements, we generally did not validate their responses. We also did not fully verify the accuracy of the total number of drug court programs, or universe of drug court programs, provided to us by DCPO and the Drug Court Clearinghouse. To assess DOJ's efforts to complete a national impact evaluation of federally funded drug court programs, we interviewed officials from (1) NIJ, who were responsible for DOJ's national evaluation effort; (2) DCPO, who were responsible for administering the federal drug court program under the Violent Crime Act; and (3) RAND, who were awarded the NIJ grant to complete phase I of the national evaluation effort.
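The stratified weighting and confidence-interval approach described above can be sketched as follows. The stratum sample sizes (39 respondents and 73 nonrespondents, drawn from a universe of 315 programs) come from the methodology described in this appendix; the stratum population sizes and "yes" counts below are illustrative assumptions only, not the report's actual data.

```python
import math

# Hypothetical strata: population size N, sample size n, and number of sampled
# programs answering "yes" to some survey question. The N and "yes" values are
# illustrative; only the totals (315 programs, 112 interviews) match the report.
strata = [
    ("respondents",    {"N": 200, "n": 39, "yes": 30}),
    ("nonrespondents", {"N": 115, "n": 73, "yes": 40}),
]

N_total = sum(s["N"] for _, s in strata)

# Stratified (weighted) estimate of the population proportion: each stratum's
# sample proportion is weighted by that stratum's share of the population.
p_hat = sum((s["N"] / N_total) * (s["yes"] / s["n"]) for _, s in strata)

# Variance of the stratified estimator, with a finite-population correction
# because each stratum's sample is a large fraction of the stratum.
var = sum(
    (s["N"] / N_total) ** 2
    * (1 - s["n"] / s["N"])                                  # finite-population correction
    * (s["yes"] / s["n"]) * (1 - s["yes"] / s["n"]) / (s["n"] - 1)
    for _, s in strata
)

# 95 percent confidence interval using the normal approximation (z = 1.96).
half_width = 1.96 * math.sqrt(var)
lower, upper = p_hat - half_width, p_hat + half_width
print(f"estimate: {p_hat:.1%}, 95% CI: {lower:.1%} to {upper:.1%}")
```

With these illustrative counts the half-width comes out at roughly 8 percentage points, consistent with the report's statement that its percentage estimates carry sampling errors of plus or minus 10 percentage points or less.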
To identify the various administrative and research factors that hampered the completion of DOJ's national impact evaluation, we (1) interviewed NIJ and RAND officials who were responsible for the research project; (2) reviewed project objectives, tasks, and milestones outlined in NIJ's original solicitation and the NIJ approved RAND proposal and grant award; (3) reviewed correspondence between NIJ and RAND from 1998-2001; and (4) reviewed various project documents, including (a) RAND's evaluability assessment, (b) progress reports submitted to NIJ, (c) RAND's requests for no-cost extensions, (d) NIJ grant adjustment notices, (e) RAND's phase I draft report, and (f) RAND's phase I final report. Additionally, we compared project task milestones included in the NIJ approved RAND proposal with the actual project task completion dates. To determine the universe and DCPO funding of drug court programs, we (a) interviewed appropriate DOJ officials and other drug court program stakeholders and practitioners; (b) reviewed and analyzed grant information obtained from DOJ's Office of Justice Programs grant management information system and DCPO; (c) reviewed and analyzed information on the universe of drug court programs maintained by the Drug Court Clearinghouse; and (d) reviewed congressional appropriations and DOJ press releases. We attempted to verify information on the universe of DCPO-funded drug court programs, but as the findings in our report note, we were unable to do so due to inefficiencies in DOJ's drug court-related grant information systems. We were able to validate and correct some of the information provided by the various sources noted above through a comparison of the various databases noted and the primary data we had collected from drug court programs during our 1997 review and during our year 2001 follow-up structured interviews with a stratified, random sample of DCPO-funded drug court programs. 
We conducted our work at DOJ headquarters in Washington, D.C., between March 2001 and February 2002 in accordance with generally accepted government auditing standards. Based on information available as of December 31, 2001, drug court programs were operating in 48 states, the District of Columbia, and Puerto Rico. New Hampshire and Vermont were the only states without an operating drug court program but both have programs being planned. Guam also has programs being planned. California, Florida, Louisiana, Missouri, New York, and Ohio account for 344, or almost 44 percent, of the 791 operating drug courts. Figure 4 shows the number of operating drug court programs in each jurisdiction. Populations targeted by U.S. drug court programs included adults, juveniles, families, and Native American tribes. Table 2 shows the breakdown by target population of operating and planned drug court programs. As Table 3 shows, drug court programs in the United States vary by target population and program status and have received various types of grants from the DOJ Drug Courts Program Office (DCPO). Table 4 shows the number and total amount of DCPO grants awarded to plan, implement, or enhance U.S. drug court programs from fiscal years 1995 through 2001. 
The chronology of key events in the NIJ-sponsored national evaluation, detailed in appendix VI, was as follows:
- NIJ issues solicitation for national evaluation of drug court programs.
- NIJ awards grant to RAND.
- RAND requests DCPO to write letters to 14 DCPO-funded sites regarding site visits for the national evaluation.
- RAND submits written progress report to NIJ (no problems or changes were noted).
- Scheduled milestone for completion of site visits.
- RAND informs NIJ that it was still awaiting DCPO introductory letter to 14 DCPO-funded sites.
- DCPO sends letter notifying 14 sites of the national evaluation.
- Scheduled milestone for completion of phase II design strategy.
- Written progress report submitted by RAND (no problems or changes were noted).
- Scheduled milestone for completion of conceptual framework.
- RAND provides evaluability assessment of 14 sites to NIJ, noting feasibility concerns.
- RAND requests conference with NIJ to discuss evaluability assessment.
- NIJ informs RAND that DCPO still wants impact evaluations on some of the 14 sites.
- RAND submits conceptual framework for 14 sites to NIJ.
- NIJ and DCPO review the conceptual framework.
- NIJ informs RAND that the report on the results of phase I must be submitted prior to the submission of a phase II proposal.
- DCPO requests findings from RAND.
- RAND requests guidance about conceptual framework paper.
- RAND requests the first no-cost extension through September 30, 2000.
- NIJ informs RAND that phase I findings should be submitted in writing before RAND submits a proposal for phase II.
- RAND informs NIJ that a report on phase I findings would be completed by November 2000.
- RAND submits written progress report to NIJ noting its findings, an alternative strategy, and its request for a no-cost extension to enable RAND to bridge the time period between phase I and phase II.
- NIJ grants RAND its first no-cost extension through September 30, 2000.
- DCPO and NIJ inquire about the status of the phase I draft report.
- NIJ reminds RAND of the original project requirements for an impact evaluation in phase II.
- RAND inquires about whether the phase I grant would be extended beyond September 30, 2000.
- NIJ asks RAND to complete the phase I report by September 30, 2000, and reiterates that any proposals for phase II should address original solicitation objectives.
- NIJ gives RAND the option to (1) let the phase I grant end and prepare the phase II proposal for a new grant or (2) extend the phase I project timeline to allow time for review of a phase II proposal.
- RAND requests second no-cost extension.
- NIJ grants no-cost extension to RAND extending completion of phase I until December 31, 2000. NIJ also inquires about status of the draft and reminds RAND that the draft must be submitted before a phase II proposal is accepted. RAND agreed.
- RAND presents results from phase I at American Society of Criminology Conference, noting that the phase I report would be available by the end of December.
- In response to an NIJ inquiry, RAND informs NIJ that a phase I draft report would be completed by the end of January 2001 (NIJ did not extend the grant).
- In response to an NIJ inquiry, RAND informs NIJ that the phase I draft report would be completed in February 2001.
- Written progress report submitted by RAND noting that a draft report will be submitted to NIJ in February 2001 (no problems were noted).
- RAND informs NIJ that a draft phase I report will be completed in March 2001.
- NIJ grants third no-cost extension to RAND extending completion of phase I until May 31, 2001, to allow for peer review of the forthcoming draft report.
- NIJ receives draft phase I report and submits draft to peer reviewers.
- NIJ informs RAND that phase II plans are uncertain.
- NIJ sends peer review results to RAND and inquires as to when the final report could be expected.
NIJ provides RAND with specific instructions to eliminate the alternative phase II proposal from the finalphase I report noting that RAND's alternative proposal was so different from the project objective that it would be inappropriate to continue the effort RAND meets with NIJ to discuss phase I effort and completion of final report. RAND informs NIJ that the final report will be completed by the end of July 2001 Written progress report submitted by RAND (no problems or changes noted) The following are GAO comments on DOJ’s letter of April 3, 2002. 1. In his reviews, Dr. Belenko noted that the long-term post-program impact of drug courts on recidivism and other outcomes are less clear—pointing out that the measurement of post-program outcomes other than recidivism remains quite limited in the drug court evaluation literature. He also noted that the evaluations varied in quality, comprehensiveness, use of comparison groups, and types of measures used and that longer follow-up and better precision in equalizing the length of follow-up between experimental and comparison groups are needed. 2. Dr. Belenko noted that the evaluations reviewed were primarily process, as opposed to impact, evaluations. He also noted that a shortcoming of some of the drug court evaluations was a lack of specificity about data collection time frames—pointing out that several studies lacked a distinction between recidivism that occurs while an offender is under drug court supervision and recidivism occurring after program participation. Charles Michael Johnson, Nettie Y. Mahone, Deborah L. Picozzi, Jerome T. Sandau, David P. Alexander, Douglas M. Sloane, and Shana B. Wallace made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. | In exchange for the possibility of dismissed charges or reduced sentences, defendants with substance abuse problems agree to be assigned to drug court programs. In drug courts, judges generally preside over the proceedings; monitor the progress of defendants; and prescribe sanctions and rewards in collaboration with prosecutors, defense attorneys, and treatment providers. Most decisions about drug court operations are left to local jurisdictions.
Although programs funded by the Drug Court Program Office (DCPO) must collect and provide performance measurement and outcome data, the Department of Justice (DOJ) has not effectively managed this effort because of (1) its inability to readily identify the universe of DCPO-funded drug court programs, including those subject to DCPO's data collection reporting requirements; (2) its inability to accurately determine the number of drug court programs responding to DCPO's semiannual data collection survey; (3) inefficiencies in the administration of DCPO's semiannual data collection effort; (4) the elimination of post-program impact questions from the data collection survey effort; and (5) the lack of use of the Drug Court Clearinghouse. Various administrative and research factors have also hampered DOJ's ability to complete the two-phase National Institute of Justice-sponsored national impact evaluation study. As a result, DOJ continues to lack vital information needed to determine the overall impact of federally funded programs and to assess whether drug court programs use federal funds effectively. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. Planned federal IT spending in fiscal year 2009 totaled about $71 billion—of which $22 billion was planned for IT system development work, and the remainder was planned for operations and maintenance of existing systems. OMB plays a key role in overseeing federal agencies’ IT investments and how they are managed, stemming from its functions of assisting the President in overseeing the preparation of the federal budget and supervising budget preparation in executive branch agencies. In helping to formulate the President’s spending plans, OMB is responsible for evaluating the effectiveness of agency programs, policies, and procedures; assessing competing funding demands among agencies; and setting funding priorities. To carry out these responsibilities, OMB depends on agencies to collect and report accurate and complete information; these activities depend, in turn, on agencies having effective IT management practices. To drive improvement in the implementation and management of IT projects, Congress enacted the Clinger-Cohen Act in 1996, expanding the responsibilities delegated to OMB and agencies under the Paperwork Reduction Act. The Clinger-Cohen Act requires agencies to engage in performance- and results-based management, and to implement and enforce IT management policies and guidelines. The act also requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. Over the past several years, we have reported and testified on OMB’s initiatives to highlight troubled projects, justify IT investments, and use project management tools. We have made multiple recommendations to OMB and federal agencies to improve these initiatives to further enhance the oversight and transparency of federal IT projects. 
As a result, OMB recently used this body of work to develop and implement improved processes to oversee and increase transparency of IT investments. Specifically, in June 2009, OMB publicly deployed a Web site that displays dashboards of all major federal IT investments to provide OMB and others with the ability to track the progress of these investments over time. Given the size and significance of the government’s investment in IT, it is important that projects be managed effectively to ensure that public resources are wisely invested. Effectively managing projects entails, among other things, pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion so that managers have an accurate view of the program’s development status. Without meaningful and coherent cost and schedule information, program managers can have a distorted view of a program’s status and risks. To address this issue, in the 1960s, the Department of Defense (DOD) developed the EVM technique, which goes beyond simply comparing budgeted costs with actual costs. This technique measures the value of work accomplished in a given period and compares it with the planned value of work scheduled for that period and with the actual cost of work accomplished. Differences in these values are measured in both cost and schedule variances. Cost variances compare the value of the completed work (i.e., the earned value) with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a negative $1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the completed work with the value of the work that was expected to be completed. 
For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a negative $5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the program. Without knowing the planned cost of completed work and work in progress (i.e., the earned value), it is difficult to determine a program’s true status. Earned value allows for this key information, which provides an objective view of program status and is necessary for understanding the health of a program. As a result, EVM can alert program managers to potential problems sooner than using expenditures alone, thereby reducing the chance and magnitude of cost overruns and schedule slippages. Moreover, EVM directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments—areas that are often found to be inadequate on the basis of our assessments of major IT investments. In August 2005, OMB issued guidance outlining steps that agencies must take for all major and high-risk development projects to better ensure improved execution and performance and to promote more effective oversight through the implementation of EVM. 
Specifically, this guidance directs agencies to (1) develop comprehensive policies to ensure that their major IT investments are using EVM to plan and manage development; (2) include a provision and clause in major acquisition contracts or agency in-house project charters directing the use of an EVM system that is compliant with the American National Standards Institute (ANSI) standard; (3) provide documentation demonstrating that the contractor’s or agency’s in-house EVM system complies with the national standard; (4) conduct periodic surveillance reviews; and (5) conduct integrated baseline reviews on individual programs to finalize their cost, schedule, and performance goals. Building on OMB’s requirements, in March 2009, we issued a guide on best practices for estimating and managing program costs. This guide highlights the policies and practices adopted by leading organizations to implement an effective EVM program. Specifically, in the guide, we identify the need for organizational policies that establish clear criteria for which programs are required to use EVM, specify compliance with the ANSI standard, require a standard product-oriented structure for defining work products, require integrated baseline reviews, provide for specialized training, establish criteria and conditions for rebaselining programs, and require an ongoing surveillance function. In addition, we identify key practices that individual programs can use to ensure that they establish a sound EVM system, that the earned value data are reliable, and that the data are used to support decision making. 
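The cost and schedule variance arithmetic described earlier (the $5 million / $6.7 million and $5 million / $10 million contractor examples) can be sketched in a few lines of code. This is a minimal illustration; the function and variable names are ours, not drawn from any agency's EVM system.

```python
def cost_variance(earned_value, actual_cost):
    # CV = earned value (budgeted cost of work performed) minus the actual
    # cost of that work. Negative: the completed work cost more than planned.
    return earned_value - actual_cost

def schedule_variance(earned_value, planned_value):
    # SV = earned value minus the budgeted cost of work scheduled, measured
    # in dollars. Negative: less work was accomplished than was planned.
    return earned_value - planned_value

# The contractor examples from the text, in millions of dollars:
cv = cost_variance(earned_value=5.0, actual_cost=6.7)         # negative $1.7 million
sv = schedule_variance(earned_value=5.0, planned_value=10.0)  # negative $5 million
```

Positive results from either function indicate work completed under cost or ahead of schedule; negative results flag overruns or slippage, which is what makes EVM an earlier warning signal than expenditure tracking alone.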
We have previously reported on the weaknesses associated with the implementation of sound EVM programs at various agencies, as well as on the lack of aggressive management action to correct poor cost and schedule performance trends based on earned value data for major system acquisition programs: In July 2008, we reported that the Federal Aviation Administration’s EVM policy was not fully consistent with best practices. For example, the agency required its program managers to obtain EVM training, but did not enforce completion of this training or require other relevant personnel to obtain this training. In addition, although the agency was using EVM to manage IT acquisitions, not all programs were ensuring that their earned value data were reliable. Specifically, of the three programs collecting EVM data, only one program adequately ensured that its earned value data were reliable. As a result, the agency faced an increased risk that managers were not getting the information they needed to effectively manage the programs. In response to our findings and recommendations, the Federal Aviation Administration reported that it had initiatives under way to improve its EVM oversight processes. In September 2008, we reported that the Department of the Treasury’s EVM policy was not fully consistent with best practices. For example, while the department’s policy addressed some practices, such as establishing clear criteria for which programs are to use EVM, it did not address others, such as requiring and enforcing EVM training. In addition, six programs at Treasury and its bureaus were not consistently implementing practices needed for establishing a comprehensive EVM system. For example, when executing work plans and recording actual costs, a key practice for ensuring that the data resulting from the EVM system are reliable, only two of the six investments that we reviewed incorporated government costs with contractor costs. 
As a result, we reported that Treasury may not be able to effectively manage its critical programs. In response to our findings and recommendations, Treasury reported that it would release a revised EVM policy and further noted that initiatives to improve EVM-related training were under way. In a series of reports and testimonies from September 2004 to June 2009, we reported that the National Oceanic and Atmospheric Administration’s National Polar-orbiting Operational Environmental Satellite System program was likely to overrun its contract at completion on the basis of our analysis of contractor EVM data. Specifically, the program had delayed key milestones and experienced technical issues in the development of key sensors, which we stated would affect cost and schedule estimates. As predicted, in June 2006 the program was restructured, decreasing its complexity, delaying the availability of the first satellite by 3 to 5 years, and increasing its cost estimate from $6.9 billion to $12.5 billion. However, the program has continued to face significant technical and management issues. As of June 2009, launch of the first satellite was delayed by 14 months, and our current projected total cost estimate is approximately $15 billion. We made multiple recommendations to improve this program, including establishing a realistic time frame for revising the cost and schedule baselines, developing plans to mitigate the risk of gaps in satellite continuity, and tracking the program executive committee’s action items from inception to closure. While the eight agencies we reviewed have established policies requiring the use of EVM on their major IT investments, none of these policies are fully consistent with best practices, such as standardizing the way work products are defined. 
We recently reported that leading organizations establish EVM policies that set clear criteria for which programs are to use EVM; require programs to comply with the ANSI standard; require programs to use a product-oriented structure for defining work products; require programs to conduct detailed reviews of expected costs, schedules, and deliverables (called an integrated baseline review); require and enforce EVM training; define when programs may revise cost and schedule baselines (called rebaselining); and require system surveillance, that is, routine validation checks to ensure that major acquisitions are continuing to comply with agency policies and standards. Table 1 describes the key components of an effective EVM policy. The eight agencies we reviewed do not have comprehensive EVM policies. Specifically, none of the agencies’ policies are fully consistent with all seven key components of an effective EVM policy. Table 2 provides a detailed assessment, by agency, and a discussion of the agencies’ policies follows the table. Criteria for implementing EVM on all major IT investments: Seven of the eight agencies fully defined criteria for implementing EVM on major IT investments. The agencies with sound policies typically defined “major” investments as those exceeding a certain cost threshold, and, in some cases, agencies defined lower tiers of investments requiring reduced levels of EVM compliance. Veterans Affairs only partially met this key practice because its policy did not clearly state whether programs or major subcomponents of programs (projects and subprojects) had to comply with EVM requirements. According to agency officials, this lack of clarity may cause EVM to be inconsistently applied across the investments. Without an established policy that clearly defines the conditions under which new or ongoing acquisition programs are required to implement EVM, these agencies cannot ensure that EVM is being appropriately applied on their major investments.
Compliance with the ANSI standard: Seven of the eight agencies required that all work activities performed on major investments be managed by an EVM system that complies with industry standards. One agency, Transportation, partially met this key practice because its policy contained inconsistent criteria for when investments must comply with standards. Specifically, in one section, the policy requires a certain class of investments to adhere to a subset of the ANSI standard; however, in another section, the policy merely states that the investments must comply with general EVM principles. This latter section is vague and could be interpreted in multiple ways, either more broadly or narrowly than the specified subset of the ANSI standard. Without consistent criteria on investment compliance, Transportation may be unable to ensure that the work activities for some of its major investments are establishing sound EVM systems that produce reliable earned value data and provide the basis for informed decision making. Standard structure for defining the work products: DOD was the only agency to fully meet this key practice by developing and requiring the use of standard product-oriented work breakdown structures. Four agencies did not meet this key practice, while the other three only partially complied. Among the agencies that partially complied, the National Aeronautics and Space Administration's (NASA) policy requires mission (or space flight) projects to use a standardized product-oriented work breakdown structure; however, IT projects do not have such a requirement. NASA officials reported that they are working to develop a standard structure for their IT projects; however, they were unable to provide a time frame for completion. Homeland Security and Justice have yet to standardize their product structures.
Agencies that did not implement this key practice cited, among other reasons, the difficulty of establishing a standard structure for component agencies that conduct different types of work of varying complexity. While this presents a challenge, agencies could adopt an approach similar to DOD’s and develop various standard work structures based on the kinds of work being performed by the various component agencies (e.g., automated information system, IT infrastructure, and IT services). Without fully implementing a standard product-oriented structure (or structures), agencies will be unable to collect and share data among programs and may not have the information they need to make decisions on specific program components. Integrated baseline review: All eight agencies required major IT investments to conduct an integrated baseline review to ensure that program baselines fully reflect the scope of work to be performed, key risks, and available resources. For example, DOD required that these reviews occur within 6 months of contract award and after major modifications have taken place, among other things. Training requirements: Commerce was the only agency to fully meet this key practice by requiring and enforcing EVM training for all personnel with investment oversight and program management responsibilities. Several of the partially compliant agencies required EVM training for project managers but did not extend this requirement to other program management personnel or executives with investment oversight responsibilities. Many agencies told us that it would be a significant challenge to require and enforce EVM training for all relevant personnel, especially at the executive level. Instead, most agencies have made voluntary EVM training courses available agencywide.
However, without comprehensive EVM training requirements and enforcement, agencies cannot effectively ensure that programs have the appropriate skills to validate and interpret EVM data, and that their executives will be able to make fully informed decisions based on the EVM analysis. Rebaselining criteria: Three of the eight agencies fully met this key practice. For example, the Justice policy outlines acceptable reasons for rebaselining, such as when the baseline no longer reflects the current scope of work being performed, and requires investments to explain why their current plans are no longer feasible and to develop realistic cost and schedule estimates for remaining work. Among the five partially compliant agencies, Agriculture and Veterans Affairs provided policies, but in draft form; NASA was in the process of updating its policy to include more detailed criteria for rebaselining; and Homeland Security did not define acceptable reasons but did require an explanation of the root causes for cost and schedule variances and the development of new cost and schedule estimates. In several cases, agencies were unaware of the detailed rebaselining criteria to be included in their EVM policies. Until their policies fully meet this key practice, agencies face an increased risk that their executive managers will make decisions about programs with incomplete information, and that these programs will continue to overrun costs and schedules because their underlying problems have not been identified or addressed. System surveillance: All eight agencies required ongoing EVM system surveillance of all programs (and contracts with EVM requirements) to ensure their continued compliance with industry standards. For example, Agriculture required its surveillance teams to submit reports—to the programs and the Chief Information Officer—with documented findings and recommendations regarding compliance. 
Furthermore, the agency also established a schedule to show when EVM surveillance is expected to take place on each of its programs. Our studies of 16 major system acquisition programs showed that all agencies are using EVM; however, the extent of that implementation varies among the programs. Our work on best practices in EVM identified 11 key practices that are implemented on acquisition programs of leading organizations. These practices can be organized into three management areas: establishing a sound EVM system, ensuring reliable data, and using earned value data to make decisions. Table 3 lists these 11 key EVM practices by management area. Of the 16 case study programs, 3 demonstrated a full level of maturity in all three management areas; 3 had full maturity in two areas; and 4 had reached full maturity in one area. The remaining 6 programs did not demonstrate full levels of maturity in any of the management areas; however, in all but 1 case, they were able to demonstrate partial capabilities in each of the three areas. Table 4 identifies the 16 case study programs and summarizes our results for these programs. Following the table is a summary of the programs’ implementation of each key area of EVM program management responsibility. Additional details on the 16 case studies are provided in appendix II. Most programs did not fully implement the key practices needed to establish comprehensive EVM systems. Of the 16 programs, 3 fully implemented the practices in this program management area, and 13 partially implemented the practices. The Decennial Response Integration System, Next Generation Identification, and Surveillance and Broadcast System programs demonstrated that they had fully implemented the six practices in this area. For example, our analysis of the Decennial Response Integration System program schedule showed that activities were properly sequenced, realistic durations were established, and labor and material resources were assigned. 
The Surveillance and Broadcast System program conducted a detailed integrated baseline review to validate its performance baseline. It was also the only program to fully institutionalize EVM at the program level—meaning that it collects performance data on the contractor and government work efforts—in order to get a complete view into program status. Thirteen programs demonstrated that they partially implemented the six key practices in this area. In most cases, programs had work breakdown structures that defined work products to an appropriate level of detail and had identified the personnel responsible for delivering these work products. However, for all 13 programs, the project schedules contained issues that undermined the quality of their performance baselines. Weaknesses in these schedules included the improper sequencing of activities, such as incomplete or missing linkages between tasks; a lack of resources assigned to all activities; invalid critical paths (the sequence of activities that, if delayed, will impact the planned completion date of the project); and the excessive or unjustified use of constraints, which impairs the program’s ability to forecast the impact of ongoing delays on future planned work activities. These weaknesses are of concern because the schedule serves as the performance baseline against which earned value is measured. As such, poor schedules undermine the overall quality of a program’s EVM system. Other key weaknesses included the following examples: Nine programs did not adequately determine an objective measure of earned value and develop the performance baseline—that is, key practices most appropriately addressed through a comprehensive integrated baseline review, which none of them fully performed. 
For example, the Air and Space Operations Center—Weapon System program conducted an integrated baseline review in May 2007 to validate one segment of work contained in the baseline; however, the program had not conducted subsequent reviews for the remaining work because doing so would preclude staff from completing their normal work activities. Other reasons cited by the programs for not performing these reviews included the lack of a fully defined scope of work or management’s decision to use ongoing EVM surveillance to satisfy these practices. Without having performed a comprehensive integrated baseline review, programs have not sufficiently evaluated the validity of their baseline plan to determine whether all significant risks contained in the plan have been identified and mitigated, and that the metrics used to measure the progress made on planned work elements are appropriate. Four programs did not define the scope of effort using a work breakdown structure. For example, the Veterans Health Information Systems and Technology Architecture—Foundations Modernization program provided a list of its subprograms; however, it did not define the scope of the detailed work elements that comprise each subprogram. Without a work breakdown structure, programs lack a basis for planning the performance baseline and assigning responsibility for that work, both of which are necessary to accomplish a program’s objectives. Many programs did not fully ensure that their EVM data were reliable. Of the 16 programs, 7 fully implemented the practices for ensuring the reliability of the prime contractor and government performance data, and 9 partially implemented the practices. 
All 7 programs that demonstrated full implementation conduct monthly reviews of earned value data with technical engineering staff and other key personnel to ensure that the data are consistent with actual performance; perform detailed performance trend analyses to track program progress, cost, and schedule drivers; and make estimates of cost at completion. Four programs that we had previously identified as having schedule weaknesses (Farm Program Modernization; Joint Tactical Radio System—Handheld, Manpack, Small Form Fit; Juno; and Warfighter Information Network—Tactical) were aware of these issues and had sufficient controls in place to mitigate them in order to ensure that the earned value data are reliable. Nine programs partially implemented the three practices for ensuring that earned value data are reliable. In all cases, the program had processes in place to review earned value data (from monthly contractor EVM reports in all but one case), identify and record cost and schedule variances, and forecast estimates at completion. However, 5 of these programs did not adequately analyze EVM performance data and properly record variances from the performance baseline. For example, 2 programs did not adequately document justifications for cost and schedule variances, including root causes, potential impacts, and corrective actions. Other weaknesses in this area include anomalies in monthly performance reports, such as negative dollars being spent for work performed, which impacts the validity of performance data. In addition, 7 of these programs did not demonstrate that they could adequately execute the work plan and record costs because, among other things, they were unaware of the schedule weaknesses we identified and did not have sufficient internal controls in place to deal with these issues to improve the reliability of the earned value data. 
Lastly, 2 of these programs could not adequately forecast estimates at completion due, in part, to anomalies in the prime contractor’s EVM reports, in combination with the weaknesses contained in the project schedule. Programs were uneven in their use of earned value data to make decisions. Of the 16 programs, 9 fully implemented the practices for using earned value data for decision making, 6 partially implemented them, and 1 did not implement them. Among the 9 programs that fully implemented these practices, both the Automated Commercial Environment and Juno programs integrated their EVM and risk management processes to support the program manager in making better decisions. The Automated Commercial Environment program actively recorded risks associated with major variances from the EVM reports in the program’s risk register. Juno further used the earned value data to analyze threats against remaining management reserve and to estimate the cost impact of these threats. Six programs demonstrated limited capabilities in using earned value data for making decisions. In most cases, these programs included earned value performance trend data in monthly program management review briefings. However, for the majority, the processes for taking management action to address the cost and schedule drivers causing poor trends were ad hoc and separate from the programs’ risk management processes, and, in most cases, the risks and issues found in the EVM reports did not correspond to the risks contained in the program risk registers. In addition, 4 of these programs were not able to adequately update the performance baseline as changes occurred because, in many cases, the original baseline was not appropriately validated. For example, the Mars Science Laboratory program only recently updated its performance baseline as part of a replan effort.
However, without validating the original and current baselines with a project-level integrated baseline review, it is unclear whether the changes to the baseline were reasonable, and whether the risks assumed in the baseline have been identified and appropriately mitigated. One program (Veterans Health Information Systems and Technology Architecture—Foundations Modernization) was not using earned value data for decision making. Specifically, the program did not actively manage earned value performance trends, nor were these data incorporated into programwide management reviews. The inconsistent application of EVM across the investments exists in part because of the weaknesses we previously identified in the eight agencies’ policies, as well as a lack of enforcement of the EVM policy components already in place. For example, deficiencies in all three management areas can be attributed, in part, to a lack of comprehensive EVM training requirements—which was a policy component that most agencies did not fully address. The only 3 programs that had fully implemented all key EVM practices either had comprehensive training requirements in their agency EVM policy or enforced rigorous training requirements beyond those the policy called for. Most of the remaining programs met the minimum requirements of their agencies’ policies. However, all programs that had attained full maturity in two management areas had also implemented more stringent training requirements, although none could match the efforts made on the other 3 programs. Without making this training a comprehensive requirement, these agencies are at risk that their major system acquisition programs will continue to have management and technical staff who lack the skills to fully implement key EVM practices. Our case study analysis also highlighted multiple areas in which programs were not in compliance with their agencies’ established EVM policies.
This is an indication that agencies are not adequately enforcing program compliance. These policy areas include requiring EVM compliance at the start of the program, validating the baseline with an integrated baseline review, and conducting ongoing EVM surveillance. Until key EVM practices are fully implemented, selected programs face an increased risk that program managers cannot effectively optimize EVM as a management tool to mitigate and reverse poor cost and schedule performance trends. Earned value data trends of the 16 case study programs indicate that most are currently experiencing cost overruns and schedule slippages, and, based on our analysis, it is likely that when these programs are completed, the total cost overrun will be about $3 billion. To date, these programs, collectively, have already overrun their original life-cycle cost estimates by almost $2 billion (see table 5). Taking the current earned value performance into account, our analysis of the 16 case study programs indicated that most are experiencing shortfalls against their currently planned cost and schedule targets. Specifically, earned value performance data over a 12-month period showed that the 16 programs combined have exceeded their cost targets by $275 million. During that period, they also experienced schedule variances and were unable to accomplish almost $93 million worth of planned work. In most cases, the negative cost and schedule performance trends were attributed to ongoing technical issues in the development or testing of system components. Furthermore, our projections of future estimated costs at completion based on our analysis of current contractor performance trends indicate that these programs will most likely continue to experience cost overruns to completion, totaling almost $1 billion. In contrast, the programs’ contractors estimate the cost overruns at completion will be approximately $469.7 million. 
These estimates are based on the contractors’ assumption that their efficiency in completing the remaining work will significantly improve over what has been done to date. Furthermore, it should be noted that in 4 cases, the contractor-estimated overrun is smaller than the cost variances they have already accumulated—which is an indication that these estimates are overly optimistic. With the inclusion of the overruns already incurred to date, the total increase in life-cycle costs will be about $3 billion. Our analysis is presented in table 6. Additional details on the 16 case studies are provided in appendix II. Eleven programs are expected to incur a cost overrun at contract completion. In particular, two programs (i.e., the James Webb Space Telescope and Veterans Health Information Systems and Technology Architecture—Foundations Modernization programs) will likely experience a combined overrun of $798.7 million, which accounts for about 80 percent of our total projection. With timely and effective action taken by program and executive management, it is possible to reverse negative performance trends so that the projected cost overruns at completion may be reduced. To achieve such results, management at all levels could be strengthened, including contractor management, program office management, and executive-level management. For example, programs could strengthen program office controls and contractor oversight by obtaining earned value data weekly (instead of monthly) so that they can make decisions with immediate and greater impact. Additionally, key risks could be elevated to the program level and, if necessary, to the executive level to ensure that appropriate mitigation plans are in place and that they are tracked to closure. Key agencies have taken a number of important steps to improve the management of major acquisitions through the implementation of EVM.
Specifically, the agencies have established EVM policies and require their major system acquisition programs to use EVM. However, none of the eight agencies that we reviewed have comprehensive EVM policies. Most of these policies omit or lack sufficient guidance on the type of work structure needed to effectively use EVM data and on the training requirements for all relevant personnel. Without comprehensive policies, it will be difficult for the agencies to gain the full benefits of EVM. Few of our 16 case study programs had fully implemented EVM capabilities, raising concerns that programs cannot efficiently produce reliable estimates of cost at completion. Many of these weaknesses found on these programs can be traced back to inadequate agency EVM policies and raise questions concerning the agencies’ enforcement of the policies already established, including the completion of the integrated baseline reviews and system surveillance. Until agencies expand and enforce their EVM policies, it will be difficult for them to optimize the effectiveness of this management tool, and they will face an increased risk that managers are not getting the information they need to effectively manage the programs. In addition to concerns about their implementation of EVM, the programs’ earned value data show trends toward cost overruns that are likely to collectively total about $3 billion. Without timely and aggressive management action, this projected overrun will be realized, resulting in the expenditure of over $1 billion more than currently planned. To address the weaknesses identified in agencies’ policies and practices in using EVM, we are making recommendations to the eight major agencies included in this review. 
Specifically, we recommend that the following three actions be taken by the Secretaries of the Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, Transportation, and Veterans Affairs and the Administrator of the National Aeronautics and Space Administration: modify policies governing EVM to ensure that they address the weaknesses that we identified, taking into consideration the criteria used in this report; direct key system acquisition programs to implement the EVM practices that address the detailed weaknesses that we identified in appendix II, taking into consideration the criteria used in this report; and direct key system acquisition programs to take action to reverse current negative performance trends, as shown in the earned value data, to mitigate the potential cost and schedule overruns. We provided the selected eight agencies with a draft of our report for review and comment. The Department of Homeland Security responded that it had no comments. The remaining seven agencies generally agreed with our results and recommendations. Agencies also provided technical comments, which we incorporated in the report as appropriate. The comments of the agencies are summarized in the following text: In e-mail comments on a draft of the report, officials from the U.S. Department of Agriculture’s Office of the Chief Information Officer stated that the department has begun to address the weaknesses in its EVM policy identified in the report. In written comments on a draft of the report, the Secretary of Commerce stated that, regarding the second and third recommendations, the Department of Commerce was pleased that the Decennial Response Integration System was found to have fully implemented all 11 key EVM practices, and that the Field Data Collection Automation program fully implemented six key practices. 
The department added that its recent actions on the Field Data Collection Automation program should move this program to full compliance with the key EVM practices. Furthermore, regarding the first recommendation, the Secretary stated that while the department understands and appreciates the value of standardized work breakdown structures, it maintained that the development of these work structures should take place at the department’s operating units (e.g., Census Bureau), given the wide diversity of missions and project complexity among these units. As noted in our report, we agree that agencies could develop standard work structures based on the kinds of work being performed by the various component agencies. Therefore, we support these efforts described by the department because they are generally consistent with the intent of our recommendation. Commerce’s comments are printed in appendix III. In written comments on a draft of the report, the Department of Defense’s Director of Defense Procurement and Acquisition Policy stated that the department concurred with our recommendations. Among other things, DOD stated that it is essential to maintain the appropriate oversight of acquisition programs, including the use of EVM data to understand program status and anticipate potential problems. DOD’s comments are printed in appendix IV. In written comments on a draft of the report, the Department of Justice’s Assistant Attorney General for Administration stated that, after discussion with our office, it was agreed that the second recommendation, related to implementing EVM practices that address identified weaknesses, was inadvertently directed to the department, and that no response was necessary. We agreed because the case study program reviewed fully met all key EVM practices. The department concurred with the two remaining recommendations related to modifying EVM policies and reversing negative performance trends.
Furthermore, the Assistant Attorney General noted that Justice had begun to take steps to improve its use of EVM, such as modifying its policy to require EVM training for all personnel with investment oversight and program management responsibilities. Justice’s comments are printed in appendix V. In written comments on a draft of the report, the National Aeronautics and Space Administration’s Deputy Administrator stated that the agency concurred with two recommendations and partially concurred with one recommendation. In particular, the Deputy Administrator agreed that opportunities exist for improving the implementation of EVM, but stated that NASA classifies the projects included in the scope of the audit as space flight projects (not as IT-specific projects), which affects the applicability of the agency’s EVM policies and guidance that were reviewed. We recognize that different classifications of IT exist; however, consistent with other programs included in the audit, the selected NASA projects integrate and rely on various elements of IT. As such, we reviewed both the agency’s space flight and IT-specific guidance. Furthermore, the agency partially concurred with one recommendation because it stated that efforts were either under way or planned that will address the weaknesses we identified. We support the efforts that NASA described in its comments because they are generally consistent with the intent of our recommendation. NASA’s comments are printed in appendix VI. In e-mail comments on a draft of the report, the Department of Transportation’s Director of Audit Relations stated that the department is taking immediate steps to modify its policies governing EVM, taking into consideration the criteria used in the draft report. In written comments on a draft of the report, the Secretary of Veterans Affairs stated that the Department of Veterans Affairs generally agreed with our conclusions and concurred with our recommendations. 
Furthermore, the Secretary stated that Veterans Affairs has initiatives under way to address the weaknesses identified in the report. Veterans Affairs’ comments are printed in appendix VII. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of the Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, Transportation, and Veterans Affairs; the Administrator of the National Aeronautics and Space Administration; and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our objectives were to (1) assess whether key departments and agencies have appropriately established earned value management (EVM) policies, (2) determine whether these agencies are adequately using earned value techniques to manage key system acquisitions, and (3) evaluate the earned value data of these selected investments to determine their cost and schedule performances. For this governmentwide review, we assessed eight agencies and 16 investments. We initially identified the 10 agencies with the highest amount of spending for information technology (IT) development, modernization, and enhancement work as reported in the Office of Management and Budget’s (OMB) Fiscal Year 2009 Exhibit 53. 
These agencies were the Departments of Agriculture, Commerce, Defense, Health and Human Services, Homeland Security, Justice, Transportation, the Treasury, and Veterans Affairs and the National Aeronautics and Space Administration. We excluded Treasury from our selection because we recently performed an extensive review of EVM at that agency. We also subsequently removed Health and Human Services from our selection because the agency did not have investments in system acquisition that met our dollar threshold (as defined in the following text). The resulting eight agencies also made up about 75 percent of the government’s planned IT spending for fiscal year 2009. To ensure that we examined significant investments, we chose from investments (related to system acquisition) that were expected to receive development, modernization, and enhancement funding in fiscal year 2009 in excess of $90 million. We limited the number of selected investments to a maximum of 3 per agency. For agencies with more than 3 investments that met our threshold, we selected the top 3 investments with the highest planned spending. For agencies with 3 or fewer such investments, we chose all of the investments meeting our dollar threshold. Lastly, we excluded investments with related EVM work already under way at GAO. To assess whether key agencies have appropriately established EVM policies, we analyzed agency policies and guidance for EVM. Specifically, we compared these policies and guidance documents with both OMB’s requirements and key best practices recognized within the federal government and industry for the implementation of EVM. These best practices are contained in the GAO cost guide. We also interviewed key agency officials to obtain information on their ongoing and future EVM plans. 
To determine whether these agencies are adequately using earned value techniques to manage key system acquisitions, we analyzed program documentation, including project work breakdown structures, project schedules, integrated baseline review briefings, risk registers, and monthly management briefings for the 16 selected investments. Specifically, we compared program documentation with EVM and scheduling best practices as identified in the cost guide. We determined whether the program implemented, partially implemented, or did not implement each of the 11 practices. We also interviewed program officials (and observed key program status review meetings) to obtain clarification on how EVM practices are implemented and how the data are used for decision-making purposes. To evaluate the earned value data of the selected investments to determine their cost and schedule performances, we analyzed the earned value data contained in contractor EVM performance reports obtained from the programs. To perform this analysis, we compared the cost of work completed with budgeted costs for scheduled work for a 12-month period to show trends in cost and schedule performances. We also used data from these reports to estimate the likely costs at completion through established earned value formulas. This resulted in three different values, with the middle value being the most likely. To assess the reliability of the cost data, we compared them with other available supporting documents (including OMB and agency financial reports); electronically tested the data to identify obvious problems with completeness or accuracy; and interviewed agency and program officials about the data. For the purposes of this report, we determined that the cost data were sufficiently reliable. We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by the agencies and the information they could provide.
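The established earned value formulas referred to above can be illustrated with a short sketch. The figures below are hypothetical, not drawn from the report: given the planned value (PV) of scheduled work, the earned value (EV) of completed work, the actual cost (AC), and the budget at completion (BAC), the cost and schedule variances and a low, most-likely, and high estimate at completion (EAC) follow directly.

```python
def evm_metrics(pv, ev, ac, bac):
    """Standard earned value formulas.

    Negative variances indicate cost overruns or schedule slippage.
    Dollar amounts here are in millions; the values are illustrative only.
    """
    cv = ev - ac    # cost variance: value of work done vs. cost incurred
    sv = ev - pv    # schedule variance (in dollars): work done vs. work planned
    cpi = ev / ac   # cost performance index
    spi = ev / pv   # schedule performance index
    # Three estimates at completion; when CPI and SPI are both below 1.0,
    # these yield a low, most-likely, and high value, respectively.
    eac_low = ac + (bac - ev)                  # remaining work at budgeted rates
    eac_likely = ac + (bac - ev) / cpi         # remaining work at current cost efficiency
    eac_high = ac + (bac - ev) / (cpi * spi)   # cost and schedule pressure combined
    return cv, sv, eac_low, eac_likely, eac_high

# Hypothetical program: $100M budget, $40M planned, $35M earned, $42M spent
cv, sv, low, likely, high = evm_metrics(pv=40.0, ev=35.0, ac=42.0, bac=100.0)
print(f"CV={cv:+.1f}M SV={sv:+.1f}M EAC range: {low:.1f} / {likely:.1f} / {high:.1f}")
```

With both indices below 1.0, the three estimates bracket the likely final cost, which is consistent with the report's use of the middle value as the most likely outcome.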
We conducted this performance audit from February to October 2009 at the agencies’ offices in the Washington, D.C., metropolitan area; Fort Monmouth, New Jersey; Jet Propulsion Lab, Pasadena, California; Hanscom Air Force Base, Massachusetts; and Naval Base San Diego, California. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted case studies of 16 major system acquisition programs (see table 7). For each of these programs, the remaining sections of this appendix provide the following: a brief description of the program, including a graphic illustration of the investment’s life cycle; an assessment of the program’s implementation of the 11 key EVM practices; and an analysis of the program’s recent earned value (EV) data and trends. These data and trends are often described in terms of cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. Schedule variances are also measured in dollars, but they compare the earned value of the completed work with the value of the work that was expected to be completed. Positive variances are good—they indicate that activities are costing less than expected or are completed ahead of schedule. Negative variances are bad—they indicate activities are costing more than expected or are falling behind schedule. The following information describes the key that we used in tables 8 through 23 to convey the results of our assessment of the 16 case study programs’ implementation of the 11 EVM practices. The program fully implemented all EVM practices in this program management area. 
The program partially implemented the EVM practices in this program management area. The program did not implement the EVM practices in this program management area. The Farm Program Modernization (MIDAS) program is intended to address the long-term needs in delivering farm benefit programs via business process reengineering and implementation of a commercial off-the-shelf enterprise resource planning solution. MIDAS is an initiative of the Farm Service Agency, which is responsible for administering 35 farm benefit programs. To support these programs, the agency uses two primary systems—a distributed network of legacy computers and a centralized Web farm (to store customer data and host Web-based applications)—both of which have shortcomings. While MIDAS is to replace these computers, it is also intended to provide new applications and redesigned business processes. The Web farm is expected to remain in operation in a supporting role for the program. Currently, MIDAS is in the initiation phase of its life cycle and plans to award the system integration contract in the first quarter of fiscal year 2010. MIDAS fully met 6 of the 11 key practices for implementing EVM and partially met 5 practices. Specifically, a key weakness in the EVM system is the lack of a comprehensive integrated baseline review. Instead, MIDAS focused solely on evaluating the program’s compliance with industry standards and chose not to validate the quality of the baseline. Program officials stated that they plan to conduct a full review to address the risks and realism of the baseline after the prime contract has been awarded. Furthermore, while the MIDAS schedule is generally sound, resources were not assigned to all activities, and the critical path (the longest duration path through the sequenced list of key activities) could not be identified because the current schedule ends in September 2009.
Finally, MIDAS met all key practices associated with data reliability, such as executing the work plan and recording costs, as well as all key practices for decision making. The Decennial Response Integration System (DRIS) is to be used during the 2010 Census for collecting and integrating census responses from all sources, including forms and telephone interviews. The system is to improve accuracy and timeliness by standardizing the response data and providing the data to other Census Bureau systems for analysis and processing. Among other things, DRIS is expected to process census data provided by respondents via census forms, telephone agents, and enumerators; assist the public via telephone; and monitor the quality and status of data capture operations. The DRIS program’s estimated life-cycle costs have increased by $372 million, which is mostly due to increases in both paper and telephone workloads. For example, the paper workload increased due to an April 2008 redesign of the 2010 Census that reverted planned automated operations to paper-based processes and requires DRIS to process an additional estimated 40 million paper forms. DRIS fully implemented all 11 of the key EVM practices necessary to manage its system acquisition program. Specifically, the program implemented all practices for establishing a comprehensive EVM system, such as defining the scope of work and scheduling the work. The program’s schedule appropriately captured and sequenced key activities and assigned realistic resources to all key activities. Furthermore, the DRIS team ensured that the resulting EVM data were appropriately verified and validated for reliability by analyzing performance data to identify the magnitude and effect of problems causing key variances, tracking related risks in the program’s risk register, and performing quality checks of the schedule and critical path.
Lastly, the DRIS program management team conducted rigorous reviews of EV performance on a monthly basis and took the appropriate management actions to mitigate risks. The Field Data Collection Automation (FDCA) program is intended to provide automation support for the 2010 Census field data collection operations. The program includes the development of handheld computers for identifying and correcting addresses for all known living quarters in the United States (known as address canvassing) and the systems, equipment, and infrastructure that field staff will use to collect data. FDCA handheld computers were originally to be used for other census field operations, such as following up with nonrespondents through personal interviews. However, in April 2008, due to problems identified during testing and cost overruns and schedule slippages in the FDCA program, the Secretary of Commerce announced a redesign of the 2010 Census, and rebaselined FDCA in October 2008. As a result, FDCA’s life-cycle costs have increased from an estimated $596 million to $801 million, a $205 million increase. Furthermore, the responsibility for the design, development, and testing of IT systems for other key field operations was moved from the FDCA contractor to the Census Bureau. FDCA fully met 6 of the 11 key practices for implementing EVM and partially met 5 others. Specifically, the program fully met most practices for establishing a comprehensive EVM system, such as defining the scope of the work effort; however, it only partially met the practice for scheduling the work. Specifically, the program schedule contained weaknesses, including key milestones with fixed completion dates—which hampers the program’s ability to see the impact that delays on open tasks have on successor tasks. As such, the FDCA program cannot use the schedule as an active management tool.
Furthermore, anomalies in the prime contractor’s EVM reports, combined with weaknesses in the master schedule, affect FDCA’s ability to execute the work plan, analyze variances, and make reliable estimates of cost at completion. Lastly, cost and schedule drivers identified in EVM reports were not fully consistent with the program’s risk register, which prevents the program from taking the appropriate management action to mitigate risks and effectively using EV data for decisions. The Air and Space Operations Center—Weapon System (AOC) is the air and space operations planning, execution, and assessment system for the Joint Force Air Component Commander. According to the agency, there are currently 11 AOCs located around the world, each aligned to the Combatant Commands of the Unified Command Plan, with additional support units for training, help desk, testing, and contingency manpower augmentation. Each AOC is designed to enable commanders to exercise command and control of air, space, information operations, and combat support forces to achieve the objectives of the joint force commander and combatant commander in joint and coalition military operations. As such, the AOC system is intended as the planning and execution engine of any air campaign. AOC fully met 7 of the 11 key practices and partially met 4 others. AOC applied EVM at the contract level and has a capable government team that has made it an integral part of project management. AOC performed detailed analyses of the EV data and reviews the data with engineering staff to ensure that the appropriate metrics have been applied for accurate reporting. AOC has also integrated EVM with its risk management processes to ensure that resources are applied to watch or mitigate risks associated with the cost and schedule drivers reported in the EVM reports. Weaknesses found in AOC’s EVM processes relate to the development and validation of the contractor baseline. 
In particular, AOC has not performed an integrated baseline review for all work that is currently on contract. The master schedule also contained issues, such as a high number of converging tasks and out-of-sequence tasks, that hamper AOC’s ability to determine the start dates of future tasks. Taken together, these issues undermine the reliability of the schedule as a baseline to measure EV performance. The Joint Tactical Radio System (JTRS) program is developing software-defined radios that are expected to interoperate with existing radios and increase communications and networking capabilities. The JTRS-Handheld, Manpack, Small Form Fit (HMS) product office, within the JTRS Ground Domain program office, is developing handheld, manpack, and small form fit radios. In 2006, the program was restructured to include two concurrent phases of development. Phase I includes select small form fit radios, while Phase II includes small form fit radios with enhanced security as well as handheld and manpack variants. Subsequent to the program’s restructure, the department updated its migration strategy for replacing legacy radios with new tactical radios. As such, the total planned quantity of JTRS-HMS radios was reduced from an original baseline of 328,514—established in May 2004—to 95,551. As a result, the total life-cycle cost of the JTRS-HMS program was reduced from an estimated $19.2 billion to $11.6 billion, a $7.6 billion decrease. JTRS-HMS fully met 10 of the 11 key practices and partially met 1 practice. Specifically, JTRS-HMS implemented most practices for establishing a comprehensive EVM system, such as performing rigorous reviews to validate the baseline; however, the current schedule contained some weaknesses, such as out-of-sequence logic and activities without resources assigned. Program officials were aware of these issues and attributed them to weaknesses in subcontractor schedules that are integrated on a monthly basis.
The JTRS-HMS program fully met practices for ensuring that the resulting EV data were appropriately verified and validated for reliability and demonstrated that the program management team was using these data for decision-making purposes. The Warfighter Information Network—Tactical (WIN-T) program is designed to be the Army’s high-speed and high-capacity backbone communications network. The program connects Department of the Army units with higher levels of command and provides the Army’s tactical portion of the Global Information Grid—a Department of Defense initiative aimed at building a secure network and set of information capabilities modeled after the Internet. WIN-T was restructured in June 2007 following a unit cost increase above the critical cost growth threshold (known as a Nunn-McCurdy breach). As a result of the restructuring, it was determined that WIN-T would be fielded in four increments. The third increment is expected to provide the Army with a full networking on-the-move capability and fully support the Army’s Future Combat Systems. In May 2009, the Increment 3 program baseline was approved, and the life-cycle cost for the program was estimated at $38.2 billion. Our assessment of EVM practices and EV data was performed on WIN-T Increment 3. WIN-T fully met 7 of the 11 key practices for implementing EVM, partially met 1 practice, and did not meet 3 practices. Specifically, WIN-T only partially met the practices for establishing a comprehensive EVM system. The schedule contained weaknesses, including fixed completion dates— which prevented the schedule from showing the impact of delays experienced on open or successor tasks or the expected completion dates of key activities. Furthermore, WIN-T has not conducted an integrated baseline review on the current scope of work since rebaselining the prime contract in December 2007. According to program officials, this review has not been conducted because they have not yet finalized the contract. 
However, as of August 2009, it has been 20 months since work began, which increases the risk that the program has not been measuring progress against a reasonable baseline. Without conducting this review to validate the performance baseline, the baseline cannot be adequately updated as changes occur, and EV data cannot be used effectively for decision-making purposes. The Automated Commercial Environment (ACE) program is the commercial trade processing system being developed by the U.S. Customs and Border Protection to facilitate trade while strengthening border security. The program is to provide trade compliance and border security staff with the right information at the right time, while minimizing administrative burden. Deployed in phases, ACE is expected to be expanded to provide cargo processing capabilities across all modes of transportation and intended to replace existing systems with a single, multimodal manifest system for land, air, rail, and sea cargo. Ultimately, ACE is expected to become the central data collection system for the federal agencies that, by law, require international trade data, and should deliver these capabilities in a secure, paper-free, Web-enabled environment. As a result of poorly managed requirements, the total life-cycle development cost of the ACE program increased from an estimated $1.5 billion to $2.2 billion—a $700 million increase. ACE fully met 9 of the 11 key practices for implementing EVM and partially met the remaining 2 practices. Specifically, ACE fully met 5 of 6 practices for establishing a comprehensive EVM system, such as defining the scope of the work effort and developing the performance baseline, but partially met the practice for scheduling the work, in part, because resources were not assigned to all activities in the master schedule.
ACE fully met 2 practices for ensuring that the data resulting from the EVM system were reliable, such as adequately analyzing EV performance data, but could not fully execute the work plan because of the weaknesses found in the schedule. Lastly, ACE demonstrated that the program management team was basing decisions on EVM data. It should be noted that the ACE program is being defined incrementally—whereby the performance baseline is continuously updated as task orders for new work are issued. As such, the use of EVM to determine the true progress made and to project reliable final costs at completion is limited. The Integrated Deepwater System is a 25-year, $24 billion major acquisition program to recapitalize the U.S. Coast Guard’s aging fleet of boats, airplanes, and helicopters, ensuring that all work together through a modern, capable communications system. This initiative is designed to enhance maritime domain awareness and enable the Coast Guard to meet its post-September 11 mission requirements. The program is composed of 15 major acquisition projects, including the Common Operational Picture (COP) program. Deepwater COP is to provide relevant, real-time operational intelligence and surveillance data to operational commanders, allowing them to direct and monitor all assigned forces and first responders. This is expected to allow commanders to distribute critical information to federal, state, and local agencies quickly; reduce duplication; enable earlier alerting; and enhance maritime awareness. Deepwater COP fully met 7 of the 11 key practices and partially met 4 others. Specifically, COP fully met 5 of the 6 practices for establishing a comprehensive EVM system, such as adequately defining all major elements of the work breakdown structure and developing the performance baseline. However, the program’s master schedule contained weaknesses, such as a large number of concurrent tasks and activities without resources assigned.
Officials were aware of some, but not all, of the weaknesses in the schedule and had controls in place to mitigate the weaknesses they were aware of in order to improve the reliability of the resulting EV data. Lastly, COP was unable to fully meet 1 of the practices for using EV data for management decisions because it could not demonstrate that cost and schedule drivers impacting EV performance were linked to its risk management processes. The Western Hemisphere Travel Initiative (WHTI) program made modifications to vehicle processing lanes at ports of entry on the nation’s northern and southern borders. WHTI is designed to allow U.S. Customs and Border Protection to effectively address new requirements imposed by the Intelligence Reform and Terrorism Prevention Act of 2004 (completing these requirements by June 1, 2009). WHTI development was completed and its implementation addressed the 39 highest volume ports of entry, which support 95 percent of land border traffic. The initiative requires travelers to present a passport or other authorized travel document that denotes identity and citizenship when entering the United States. WHTI fully met 6 of the 11 key practices for implementing EVM and partially met the remaining 5 practices. Specifically, weaknesses identified in validating the performance baseline and scheduling the work limited the program’s ability to establish a comprehensive EVM system. Although the program held an integrated baseline review to validate the baseline in March 2008, the review did not cover many key aspects, such as identifying corrective actions needed to mitigate program risks. Furthermore, the master schedule contained deficiencies, such as activities that were out of sequence or lacking dependencies.
While program officials described their use of processes for ensuring the reliability of the EVM system’s data, such as capturing significant cost and schedule drivers in the risk register, the provided documentation did not corroborate what we were told. When combined, these weaknesses preclude the program from effectively making decisions about the program based on EV data. The Next Generation Identification (NGI) program is designed to support the Federal Bureau of Investigation’s mission to reduce terrorist and criminal activities by providing timely, relevant criminal justice information to the law enforcement community. Today, the bureau operates and maintains one of the largest repositories of biometric-supported criminal history records in the world. The electronic identification and criminal history services support more than 82,000 criminal justice agencies, authorized civil agencies, and international organizations. NGI is intended to ensure that the bureau’s biometric systems are able to seamlessly share data that are complete, accurate, current, and timely. To accomplish this, the current system will be replaced or upgraded with new functionalities and state-of-the-art equipment. NGI is expected to be scalable to accommodate five times the current workload volume with no increase in support manpower and will be flexible to respond to changing requirements. NGI fully implemented all 11 key EVM practices. Specifically, the program implemented all practices for establishing a comprehensive EVM system, such as defining the scope of work and scheduling the work. For example, the schedule properly captured key activities, established reasonable durations, and established a sound critical path, all of which contribute to establishing a reliable baseline that performance can be measured against.
Furthermore, the NGI team ensured that the resulting EV data were appropriately verified and validated for reliability by, for example, integrating the analysis of cost and schedule variances with the program’s risk register to mitigate emerging and existing risks associated with key drivers causing major variances. In addition, the program’s risk register includes cost and schedule impacts for every risk and links to the management reserve process. Lastly, NGI demonstrated that it is using EV data to make decisions by performing continuous quality checks of the schedule, reviewing open risks and opportunities, and reviewing EV data in weekly management reports. The James Webb Space Telescope (JWST) is designed to be the scientific successor to the Hubble Space Telescope and expected to be the premier observatory of the next decade. It is intended to study and answer fundamental astrophysical questions, ranging from the formation and structure of the Universe to the origin of planetary systems and the origins of life. The telescope is an international collaboration of the National Aeronautics and Space Administration (NASA), the Canadian Space Agency, and the European Space Agency. JWST required the development of several new technologies, including a folding segmented primary mirror that will unfold after launch and a cryocooler for cooling mid-infrared detectors to 7 degrees Kelvin. JWST fully met 4 of the 11 key practices and partially met 7 practices. The project only partially met practices for establishing a comprehensive EVM system because of weaknesses in the work breakdown structure, in which the prime contractor has not fully defined the scope of each work element. In addition, the project only partially met the practice for scheduling work because of weaknesses resulting from manual integration of approximately 30 schedules, although officials did explain some mitigations for this risk.
We also found deficiencies in the lower-level schedules, such as missing linkages between tasks, resources not being assigned, and excessively high durations. Furthermore, JWST only partially implemented practices to ensure that the data resulting from the EVM system are reliable, due, in part, to variance analysis reports being done quarterly (instead of monthly), which limits the project’s ability to analyze and respond to cost and schedule variances in a timely manner. When combined, these weaknesses preclude the program from effectively making decisions about the program based on EV data. Juno is part of the New Frontiers Program. The overarching scientific goal of the Juno mission is to improve our understanding of the origin and evolution of Jupiter. As the archetype of giant planets, Jupiter may provide knowledge that will improve our understanding of both the origin of our solar system and the planetary systems being discovered around other stars. The Juno project is expected to use a solar-powered spacecraft to make global maps of the gravity, magnetic fields, and atmospheric composition of Jupiter. The spacecraft is to make 33 orbits of Jupiter to sample the planet’s full range of latitudes and longitudes. Juno fully met 8 of the 11 key practices for implementing EVM and partially met 3 practices. Specifically, the project fully met 3 practices for establishing a comprehensive EVM system, but only partially met the practices for scheduling the work, determining the objective measure of earned value, and establishing the performance baseline. Juno was unable to fully meet these practices because the project’s master schedule contained issues with the sequencing of work activities and because the project lacked a comprehensive integrated baseline review.
Although an integrated baseline review was conducted for a major contract in February 2009, the program did not validate the baseline, scope of work to be performed, or key risks and mitigation plans for the Juno project as a whole, which increases the risk that the project is measuring performance against an unreasonable baseline. Juno fully implemented all 3 practices associated with data reliability and the 2 practices associated with using EV data for decision-making purposes. The Mars Science Laboratory (MSL) is part of the Mars Exploration Program. The program seeks to understand whether Mars was, is, or can be a habitable world. To answer this question, the MSL project is expected to investigate how geologic, climatic, and other processes have worked to shape Mars and its environment over time, as well as how they interact today. To accomplish this, the MSL project plans to place a mobile science laboratory on the surface of Mars to quantitatively assess a local site as a potential habitat for life, past or present. The project is considered one of NASA’s flagship projects and designed to be the most advanced rover ever sent to explore the surface of Mars. Due to technical issues identified during the development of key components, the MSL launch date has recently slipped 2 years—from September 2009 to October 2011, and the project’s life-cycle cost estimate has increased from about $1.63 billion to $2.29 billion, a $652 million increase. MSL fully met 5 of the 11 key practices and partially met 6 others. Specifically, MSL fully met 3 practices for establishing a comprehensive EVM system, but only partially met 3 others because of weaknesses in the sequencing of all activities in the schedule and the lack of an integrated baseline review to validate the baseline and assess the achievability of the plan. 
While the project has taken steps to mitigate the latter weakness by requiring work agreements that document, among other things, the objective value of work and related risks for planned work packages, this is not a comprehensive review of the project’s baseline. Furthermore, MSL only partially implemented practices associated with data reliability because its analysis of cost and schedule variances did not include the root causes for variances and corrective actions, which prevents the project from tracking and mitigating related risks. Lastly, without an initial validation of the performance baseline, the baseline cannot be appropriately updated to reflect program changes, thereby limiting the use of EV data for management decisions. The En Route Automation Modernization (ERAM) program is to replace existing software and hardware in the air traffic control automation computer system and its backup system, the Direct Radar Channel, and other associated interfaces, communications, and support infrastructure at en route centers across the country. This is a critical effort because ERAM is expected to upgrade hardware and software for facilities that control high-altitude air traffic. ERAM consists of two major components. One component has been fully deployed and is currently in operation at facilities across the country. The other component is scheduled for deployment through fiscal year 2011. ERAM fully met 7 of the 11 key practices and partially met 4 others. ERAM applies EVM at the contract level and incorporates EV data into its overall management of the program. However, ERAM did not perform a comprehensive review of the baseline when the contract was finalized, or take similar actions to validate the baseline and ensure that the appropriate EV metrics had been applied. 
While ERAM does perform limited checks of the contractor schedule, our analysis showed some issues with the sequencing of activities and the use of constraints that may undermine the reliability of the schedule as a baseline to measure performance. However, it should be noted that the EV data are not a reflection of the total ERAM program. The government is also responsible for acquisition work—to which EVM is not being applied. Our analysis of the master schedule showed that ERAM would be unable to meet four major upcoming initial operating capability milestones due to issues associated with government work activities. Program officials noted that these milestones have since been pushed out. Since EVM is not applied at the program level, it is unclear whether these delays will impact overall cost. The Surveillance and Broadcast System (SBS) is to provide new surveillance solutions that employ technology using avionics and ground stations for improved accuracy and update rates and to provide shared situational awareness (including visual updates of traffic, weather, and flight notices) between pilots and air traffic control. These technologies are considered critical to achieving the Federal Aviation Administration’s strategic goals of decreasing the rate of accidents and incursions, improving the efficiency of air traffic, and reducing congestion. SBS fully implemented all 11 key EVM practices. Specifically, SBS has institutionalized EVM at the program level—meaning that it collects and manages performance data on the contractor and government work efforts—in order to get a comprehensive view into program status. As part of this initiative, SBS performed detailed validation reviews of the contractor and program baselines; issued various process rules on resource planning, EV metrics, and data analysis; and collected government timecard data in order to ensure consistent EV application. 
In addition, the program management team conducted rigorous reviews of EV performance with the SBS program manager and the program’s internal management review board on a monthly basis. Our analysis of the SBS master schedule showed that it was developed in accordance with scheduling best practices. For example, the schedule was properly sequenced, and the resources were assigned. Furthermore, SBS briefed the program manager monthly on the quality of the schedule to identify, for example, tasks without predecessors. The Veterans Health Information Systems and Technology Architecture—Foundations Modernization (VistA-FM) program addresses the need to transition the Veterans Affairs electronic medical record system to a new architecture. According to the department, the current system is costly and difficult to maintain and does not integrate well with newer software packages. VistA-FM is designed to provide a new architectural framework as well as additional standardization and common services components. This is intended to eliminate redundancies in coding and support interoperability among applications. Ultimately, the new architecture will lay the foundation for a new generation of computer systems in support of caring for America’s veterans. During the course of our review, the department’s Chief Information Officer suspended multiple components of the VistA-FM program until a new development plan can be put in place. This action was taken as part of a new departmentwide initiative to identify troubled IT projects and improve their execution. VistA-FM partially met 4 key practices and did not meet 7 others, despite reporting compliance with the American National Standards Institute (ANSI) standard in its 2010 business case submission. Specifically, the program is still working to establish a comprehensive EVM system to meet ANSI compliance, among other things.
For example, the work breakdown structure is organized around key program milestones instead of product deliverables, and does not fully describe the scope of work to be performed. Although the program’s subprojects maintain their own schedules, VistA-FM does not currently have an integrated master schedule at the program level. This is of concern because it is not possible to establish the program’s critical path and the time-phased budget baseline, a key component of EVM. The reliability of the data is also a potential issue because the program’s EVM reports do not offer adequate detail to provide insight into data reliability issues. Additionally, the performance baseline has not been appropriately updated; program officials stated this update is in progress, but they did not have a completion date. In addition to the contact name above, individuals making contributions to this report included Carol Cha (Assistant Director), Neil Doherty, Kaelin Kuhn, Jason Lee, Lee McCracken, Colleen Phillips, Karen Richey, Teresa Smith, Matthew Snyder, Jonathan Ticehurst, Kevin Walsh, and China Williams. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-09-326SP. Washington, D.C.: March 30, 2009. Discusses the Department of Defense’s Joint Tactical Radio System— Handheld, Manpack, Small Form Fit and Warfighter Information Network—Tactical programs. Information Technology: Census Bureau Testing of 2010 Decennial Systems Can Be Strengthened. GAO-09-262. Washington, D.C.: March 5, 2009. Discusses the Department of Commerce’s Decennial Response Integration System and Field Data Collection Automation programs. NASA: Assessments of Selected Large-Scale Projects. GAO-09-306SP. Washington, D.C.: March 2, 2009. Discusses the National Aeronautics and Space Administration’s James Webb Space Telescope and Mars Science Laboratory programs. 
Air Traffic Control: FAA Uses Earned Value Techniques to Help Manage Information Technology Acquisitions, but Needs to Clarify Policy and Strengthen Oversight. GAO-08-756. Washington, D.C.: July 18, 2008. Discusses the Department of Transportation’s En Route Automation Modernization and Surveillance and Broadcast System programs. Information Technology: Agriculture Needs to Strengthen Management Practices for Stabilizing and Modernizing Its Farm Program Delivery Systems. GAO-08-657. Washington, D.C.: May 16, 2008. Discusses the U.S. Department of Agriculture’s Farm Program Modernization program. Information Technology: Improvements for Acquisition of Customs Trade Processing System Continue, but Further Efforts Needed to Avoid More Cost and Schedule Shortfalls. GAO-08-46. Washington, D.C.: October 25, 2007. Discusses the Department of Homeland Security’s Automated Commercial Environment program. Defense Acquisitions: The Global Information Grid and Challenges Facing Its Implementation. GAO-04-858. Washington, D.C.: July 28, 2004. Discusses the Department of Defense’s Warfighter Information Network—Tactical program.

In fiscal year 2009, the federal government planned to spend about $71 billion on information technology (IT) investments. To more effectively manage such investments, in 2005 the Office of Management and Budget (OMB) directed agencies to implement earned value management (EVM). EVM is a project management approach that, if implemented appropriately, provides objective reports of project status, produces early warning signs of impending schedule delays and cost overruns, and provides unbiased estimates of anticipated costs at completion. GAO was asked to assess selected agencies' EVM policies, determine whether they are adequately using earned value techniques to manage key system acquisitions, and evaluate selected investments' earned value data to determine their cost and schedule performances.
To do so, GAO compared agency policies with best practices, performed case studies, and reviewed documentation from eight agencies and 16 major investments with the highest levels of IT development-related spending in fiscal year 2009. While all eight agencies have established policies requiring the use of EVM on major IT investments, these policies are not fully consistent with best practices. In particular, most lack training requirements for all relevant personnel responsible for investment oversight. Most policies also do not have adequately defined criteria for revising program cost and schedule baselines. Until agencies expand and enforce their EVM policies, it will be difficult for them to gain the full benefits of EVM. GAO's analysis of 16 investments shows that agencies are using EVM to manage their system acquisitions; however, the extent of implementation varies. Specifically, for 13 of the 16 investments, key practices necessary for sound EVM execution had not been implemented. For example, the project schedules for these investments contained issues--such as the improper sequencing of key activities--that undermine the quality of their performance baselines. This inconsistent application of EVM exists in part because of the weaknesses contained in agencies' policies, combined with a lack of enforcement of policies already in place. Until key EVM practices are fully implemented, these investments face an increased risk that managers cannot effectively optimize EVM as a management tool. Furthermore, earned value data trends of these investments indicate that most are currently experiencing shortfalls against cost and schedule targets. The total life-cycle costs of these programs have increased by about $2 billion. Based on GAO's analysis of current performance trends, 11 programs will likely incur cost overruns that will total about $1 billion at contract completion--in particular, 2 of these programs account for about 80 percent of this projection.
As such, GAO estimates the total cost overrun to be about $3 billion at program completion (see figure). However, with timely and effective management action, it is possible to reverse negative trends so that the projected cost overruns may be reduced.
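The earned value analysis underlying these findings rests on a handful of standard measures: planned value (PV), earned value (EV), actual cost (AC), and budget at completion (BAC), as defined in the ANSI/EIA-748 EVM standard. A minimal sketch of the arithmetic, using invented dollar figures rather than any program's reported data:

```python
def evm_metrics(pv, ev, ac, bac):
    """Standard earned value metrics: variances, performance indices, and an
    estimate at completion (EAC) assuming current cost efficiency persists."""
    cv = ev - ac      # cost variance (negative = overrun)
    sv = ev - pv      # schedule variance (negative = behind schedule)
    cpi = ev / ac     # cost performance index
    spi = ev / pv     # schedule performance index
    eac = bac / cpi   # projected total cost at completion
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

# Illustrative figures in millions of dollars, not taken from the report.
m = evm_metrics(pv=50.0, ev=40.0, ac=60.0, bac=200.0)
print(m)
```

In this example the CPI of about 0.67 projects the $200 million budget to finish at roughly $300 million; indices below 1.0 are the early warning signs of cost overruns and schedule slips that the report describes.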
There are four major steps in the contract-level RADV audit process as reported by CMS: MA contract selection. CMS selects 30 MA organization contracts for contract-level RADV audits, which agency officials stated provides a sufficient representation of contracts (about 5 percent) without imposing unreasonable costs on the agency. An MA organization may have more than one contract selected for a contract-level RADV audit. CMS selects contracts based on diagnosis coding intensity, which the agency defines for each contract as the average change in the risk score component specifically associated with the reported diagnoses for the beneficiaries covered by the contract. That is, increases in coding intensity measure the extent to which the estimated medical needs of the beneficiaries in a contract increase from year to year; thus, contracts whose beneficiaries appear to be getting “sicker” at a relatively rapid rate, based on the information submitted to CMS, will have relatively high coding intensity scores. Contracts with the highest increases in coding intensity are those with beneficiaries whose reported diagnoses increased in severity at the fastest rates. CMS officials stated that the agency adopted this selection methodology to (1) focus the contract-level RADV audits on MA organization contracts that might be more likely to have submitted diagnoses that are not supported by the medical records and (2) provide additional oversight of contracts with the most aggressive coding. To be eligible for a contract-level audit, MA contracts must have had at least three pair-years of data that can be used to distinguish a change in disease risk scores from one year to the next; that is, the contract must have been in place for at least 4 years of continuous payment activity plus the audit year. For each pair year, CMS’s coding intensity calculation excludes beneficiaries not enrolled in the same contract or not eligible for Medicare in consecutive years. 
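The coding intensity measure described above, the average year-to-year change in disease risk scores among beneficiaries who stay enrolled in the same contract, can be sketched as follows. The function shape and the sample scores are hypothetical illustrations, not CMS's actual implementation.

```python
def coding_intensity(risk_scores_by_year):
    """Average year-over-year change in disease risk scores across pair-years.
    `risk_scores_by_year` maps year -> {beneficiary_id: disease risk score}.
    Only beneficiaries enrolled in the contract in both years of a pair count."""
    years = sorted(risk_scores_by_year)
    changes = []
    for y1, y2 in zip(years, years[1:]):
        # Beneficiaries present in both years form the pair-year population.
        common = risk_scores_by_year[y1].keys() & risk_scores_by_year[y2].keys()
        changes += [risk_scores_by_year[y2][b] - risk_scores_by_year[y1][b]
                    for b in common]
    return sum(changes) / len(changes)

# Hypothetical contract: C leaves after 2009, D joins in 2011 (no pair for either).
scores = {
    2009: {"A": 0.90, "B": 1.10, "C": 1.00},
    2010: {"A": 1.00, "B": 1.25},
    2011: {"A": 1.05, "B": 1.40, "D": 0.80},
}
print(coding_intensity(scores))
```

A contract whose continuously enrolled beneficiaries show large average score increases would rank as "high coding intensity" under a scheme like this.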
CMS ranks contracts by coding intensity and divides them into three categories: high, medium, and low. CMS then randomly selects contracts for audit: 20 from the high category, 5 from the medium category, and 5 from the low category. According to CMS officials, this strategy ensures contracts with the highest coding intensity— considered high risk for improper payments by CMS—have a higher probability for audit while keeping all contracts at risk for review. MA beneficiary sampling. After CMS selects 30 MA contracts to audit, the agency selects the beneficiaries whose medical records will be the focus of review. Up to 201 beneficiaries are chosen from each contract based on the individuals’ risk scores using a stratified random sample: 67 beneficiaries from each of the three risk score groups (highest one-third of risk scores, the middle one-third, and the lowest third). Medical record collection and review. After selecting beneficiaries for review, CMS requests supporting medical record documentation for all diagnoses submitted to adjust risk in the payment year. The MA organization may submit up to five medical records per audited diagnosis. CMS contractors review the submitted medical records to determine if the records support the diagnoses submitted by the MA organizations. If the initial reviewer determines that a diagnosis is not supported, a second reviewer reviews the case. Payment error calculation and extrapolation. When medical record review is completed, CMS extrapolates a payment error rate to the entire contract beginning with contract-level audits of 2011 payments. Each beneficiary’s payment error is multiplied by a sampling weight and the number of months the beneficiary was enrolled in the MA contract during the payment year. 
After these beneficiary-level payment errors are summed, the amount CMS will seek to recover will be reduced by (1) using the lower limit of a 99 percent confidence interval based on the sample and (2) reducing the recovery amount by a FFS adjuster amount that estimates payment errors that would have likely occurred in FFS claims data. Once the recovery amount is finalized, CMS releases contract-level RADV audit finding reports to each audited MA organization, which may dispute the results of medical record review or appeal the audit findings. Beginning with the contract-level RADV audits of 2011 payments, CMS will collect extrapolated overpayments from MA organizations once all appeals are final. Recovery auditors have been used in various industries, including health care, to identify and collect overpayments. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 directed CMS to test the use of RACs to identify overpayments and underpayments through a postpayment review of FFS medical claims and recoup overpayments. The Tax Relief and Health Care Act of 2006 required CMS to implement a permanent national recovery audit contractor program by January 1, 2010 and to compensate RACs using a contingency fee structure under which the RACs are paid from recovered overpayments. The Patient Protection and Affordable Care Act expanded the recovery audit program initiated in Medicare FFS to MA plans under Part C, among other things. In future contract-level RADV audits, CMS also will review diagnoses submitted through MA encounter data. While CMS previously collected diagnoses from MA organizations, in 2012 the agency also began collecting encounter data from MA organizations similar to that submitted on FFS claims. CMS requires MA organizations to submit, via the Encounter Data System, encounter data weekly, biweekly, or monthly depending on their number of enrollees. 
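Returning to the extrapolation step described above, a simplified sketch of the arithmetic follows. The sampling weights, enrollment months, and FFS adjuster figure are invented for illustration, and the standard error uses a plain normal approximation rather than CMS's actual stratified estimator.

```python
import statistics
from math import sqrt

Z_99 = 2.576  # two-sided 99 percent normal critical value

def recovery_amount(sample, ffs_adjuster):
    """Sketch of contract-level extrapolation: each sampled beneficiary's
    payment error is scaled by its sampling weight and months enrolled, the
    scaled errors are summed, and the recovery is the lower limit of a 99
    percent confidence interval minus a fee-for-service (FFS) adjuster.
    `sample` is a list of (monthly_error, sampling_weight, months_enrolled)."""
    scaled = [err * weight * months for err, weight, months in sample]
    point = sum(scaled)
    # Simplified standard error of the total (ignores the stratified design).
    se = statistics.stdev(scaled) * sqrt(len(scaled))
    lower = point - Z_99 * se
    return max(lower - ffs_adjuster, 0.0)

# Hypothetical audit sample: (monthly error $, weight, months enrolled).
sample = [(100.0, 50, 12), (90.0, 50, 12), (110.0, 50, 12),
          (95.0, 50, 12), (105.0, 50, 12)]
print(round(recovery_amount(sample, ffs_adjuster=5000.0), 2))
```

Using the conservative lower confidence limit, rather than the point estimate, is what makes the final recovery amount smaller than the raw extrapolated overpayment.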
Encounter data include diagnosis and treatment information recorded by providers for all medical services and may either originate from claims that providers submit to MA organizations for payment or from MA organizations’ medical record review. CMS started including the diagnosis information from MA encounter data from 2014 dates of service when calculating 2015 enrollee risk scores. While coding intensity scores can be helpful in assessing the likelihood of improper payments for MA contracts, results from the CMS contract-level RADV audits of 2007 payments indicate that the coding intensity scores CMS calculated were not strongly correlated with the percentage of unsupported diagnoses within a contract. The fact that this correlation is not strong reduces the likelihood that contracts selected for audit would be those most likely to yield large amounts of improper payments and hampers CMS’s goal of using the audits to recover improper payments. In addition, internal control standards for federal agencies state that agencies should use and communicate quality information in achieving program goals. Figure 1 shows, for example, that CMS reported that the percentage of unsupported diagnoses (36.0 percent) among the high coding intensity contracts it audited was nearly identical to the percentage of unsupported diagnoses (35.7 percent) among the medium coding intensity contracts audited. In addition, 7 contracts in the high coding intensity group had unsupported diagnosis rates below 30 percent, including the contract with the highest coding intensity score. Several shortcomings in CMS’s methods for calculating coding intensity could have weakened the correlation between the degree of coding intensity and the percentage of improper payments. These shortcomings and their potential effects are as follows. 
CMS’s coding intensity calculation may be based on noncomparable coding intensity scores across contracts because (1) the years of data used for each contract may not be the same and (2) coding intensity scores are not standardized to control for year-to-year differences. First, although CMS officials stated that the agency requires at least three pair-years of data for each contract, the agency includes data from all available years for each contract, which may vary between contracts. Because the growth in risk scores was lower in the MA program in earlier years among beneficiaries who continuously enrolled in the program, CMS’s inconsistent standard of years measured for each contract would tend to produce higher coding intensity scores for contracts that entered the MA market during periods of higher risk score growth. Among beneficiaries who enrolled in MA in consecutive years, the growth in average risk scores was 0.106 from 2004 through 2006, 0.119 from 2006 through 2010, and 0.132 from 2010 through 2013. Second, CMS officials stated that the agency does not standardize its coding intensity data relative to a measure of central tendency. Because CMS’s coding intensity calculation does not account for the expected increase in risk scores during each period of growth, changes in risk scores may be more volatile from year to year than they would likely be if standardized or indexed to a measure of central tendency. CMS’s coding intensity calculation does not distinguish between the diagnoses that were likely coded by providers and the diagnoses that were likely revised by MA organizations. MA organizations may receive diagnoses from providers that are related to services rendered to MA beneficiaries. Because these diagnoses are submitted by providers, the medical records they create may be more likely to support these diagnoses compared with diagnoses that are subsequently coded by the MA organization through medical record chart reviews.
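CMS has not published how such standardization would work. One common approach, shown here as an assumption rather than CMS's method, is to index each contract's risk score change against the mean and spread of all contracts' changes for the same pair of years, so that periods of higher overall risk score growth do not inflate the scores of contracts that entered the market during those periods.

```python
import statistics

def standardized_scores(raw_changes):
    """Standardize each contract's year-over-year risk score change against
    the mean change across all contracts for the same pair of years."""
    mean = statistics.mean(raw_changes)
    sd = statistics.pstdev(raw_changes)
    return [(c - mean) / sd for c in raw_changes]

# Hypothetical risk score changes for six contracts in one pair of years.
changes = [0.10, 0.14, 0.12, 0.18, 0.09, 0.13]
print([round(z, 2) for z in standardized_scores(changes)])
```

After standardization, a contract's score reflects how unusual its growth was relative to its peers in the same period, rather than the absolute growth, which varies with market-wide trends.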
For future years, CMS has an available method to distinguish between diagnoses likely submitted by providers to MA organizations and diagnoses that were likely later added by MA organizations. CMS’s Encounter Data System provides a way for MA organizations to designate supplemental diagnoses that the organization added or revised after conducting medical record review. CMS has not outlined plans for incorporating encounter data into its contract selection methodology, even though the encounter data could help target the submitted diagnoses that may be most likely related to improper payments in the future. CMS tracks contracts that are renewed or consolidated under a different existing contract within the same MA organization; however, the agency’s coding intensity calculation does not include the risk scores from the prior contract in the MA organization’s renewed contract. This may result in overestimated improper payment risk if MA organizations move beneficiaries with higher risk scores—such as those with special needs—into one consolidated contract. CMS’s contract selection methodology did not (1) always target contracts with the highest coding intensity scores, (2) use results from prior contract-level RADV audits, (3) account for contract consolidation, and (4) account for contracts with high enrollment. These shortcomings are impediments to CMS’s goal of recovering improper payments and are counter to federal internal control standards, which require that agencies use quality information to achieve their program goals. For the 2011 contract-level RADV audits, CMS used a contract selection methodology that did not focus on contracts with the highest coding intensity scores. While we found that coding intensity scores are not strongly correlated with diagnostic discrepancies, they are somewhat correlated. CMS failed to fully consider that correlation for the 2011 contract-level RADV audit.
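If supplemental diagnoses were flagged in the Encounter Data System as described above, separating MA-added diagnoses from provider-coded ones would reduce to a simple filter. The record layout, contract identifiers, diagnosis codes, and flag name below are hypothetical; the Encounter Data System's actual format differs.

```python
# Hypothetical encounter records; the "supplemental" flag marks diagnoses an
# MA organization added through chart review rather than received from a provider.
encounters = [
    {"contract": "H0001", "diagnosis": "E11.9", "supplemental": False},
    {"contract": "H0001", "diagnosis": "I50.9", "supplemental": True},
    {"contract": "H0001", "diagnosis": "J44.9", "supplemental": True},
    {"contract": "H0002", "diagnosis": "E11.9", "supplemental": False},
]

def supplemental_share(records, contract):
    """Share of a contract's submitted diagnoses that were added by the MA
    organization rather than coded by providers."""
    mine = [r for r in records if r["contract"] == contract]
    added = sum(1 for r in mine if r["supplemental"])
    return added / len(mine)

print(supplemental_share(encounters, "H0001"))
```

A coding intensity score computed only over the flagged records would isolate the diagnoses most likely, per the report's reasoning, to lack medical record support.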
For that audit, CMS officials stated that 20 of the 30 contracts were chosen because they were among the top third of all contracts in coding intensity, but we found that many of the 20 contracts were not at the highest risk for improper payments according to CMS’s estimate of coding intensity. Only 4 of the 20 contracts ranked among the highest 10 percent in coding intensity, while 8 of the 20 contracts ranked below the 75th percentile in the coding intensity distribution (see fig. 2). In addition, CMS chose 5 of the 30 contracts because they were among the bottom third of all contracts in coding intensity, even though CMS’s contract-level RADV audits of 2007 payments found that all contracts in the lowest third of the agency’s coding intensity calculation had a below-average percentage of unsupported diagnoses. CMS officials stated that the RADV contract selection methodology includes these contracts to show that all contracts are at risk of being audited. However, officials also stated that MA organizations are not informed of their contracts’ coding intensity relative to all other MA contracts; thus, MA organizations cannot be certain their contracts will not be audited even if CMS announced it will no longer audit low coding intensity contracts. According to agency officials, CMS’s 2011 contract-level RADV contract selection methodology also did not consider results from the agency’s prior RADV audits, potentially overlooking information indicating contracts with known improper payment risk. Thus, contracts with the highest rates of unsupported diagnoses in the 2007 contract-level RADV audits were not among those selected for 2011 contract-level RADV audits. While CMS selected 6 contracts for 2011 that also underwent 2007 contract-level RADV audits, only 1 of these contracts was among the 10 with the highest rates of unsupported diagnoses in 2007.
For the 2011 contract-level RADV audits, CMS officials stated that the agency selected 6 MA contracts because the HHS Office of Inspector General had conducted audits of 2007 payments on those contracts, but CMS did not know the rates of unsupported diagnoses for those contracts and did not determine which of them were at high risk of improper payments. By not considering results from prior contract-level RADV audits, CMS’s contract selection methodology also did not account for contract consolidation. An MA organization may have more than one contract in a service area; further, it may no longer have a contract that underwent a prior RADV audit but continue to operate another contract within the same service area. For example, the contract with the highest rate of unsupported diagnoses in the 2007 contract-level RADV audit is no longer in place, but the MA organization continues to operate a different contract that includes the service area from its prior contract. Thus, without considering all of an MA organization’s contracts in that service area, CMS cannot audit the beneficiaries affiliated with the highest percentage of unsupported diagnoses in 2007. Although the potential dollar amount of improper payments to MA organizations with high rates of unsupported diagnoses is likely greater when contract enrollment is large, CMS officials stated that the 2011 contract-level RADV contract selection methodology did not account for contracts with high enrollment. In 2011, the median enrollment among MA contracts was about 5,000, while enrollment at the 90th percentile was nearly 45,000. Some MA contracts with large enrollment had high rates of unsupported diagnoses under prior contract-level RADV audits. For example, 5 of the 10 MA contracts with the highest rates of unsupported diagnoses for the 2007 contract-level RADV audits had 2011 enrollment above the 90th percentile.
CMS officials reported that current contract-level RADV audits have been ongoing for several years, including the appeals associated with the 2007 contract-level RADV audits. (See fig. 3.) For audits of 2007 payments, CMS notified MA organizations in November 2008 that their contracts would be audited but did not complete medical record review until approximately 4-1/2 years later in March 2013. Similarly, 2011 contract-level RADV audits had not been completed as of August 2015. CMS notified MA organizations of contract audit selection in November 2013 but did not begin medical record review for these contracts until May 2015. CMS officials said the agency will start collecting payments from the 2011 contract-level RADV audits in fiscal year 2016. As the agency is in the medical record review phase, appeals have not yet started. This slow progress in completing audits is contrary to CMS’s goal to conduct contract-level RADV audits on an annual basis and slows its recovery of improper payments. In addition, CMS lacks a timetable that would help the agency to complete these contract-level audits on an annual cycle. In contrast, the national RADV audit that calculates the national improper payment estimate uses a timetable, but this is not applied to the contract-level audits. The national RADV audits that CMS annually conducts to estimate the national MA improper payment rate under IPIA provide the agency with a possible timetable for completing annual contract-level RADV audits. CMS has not followed established project management principles in this regard, which call for developing an overall plan to meet strategic goals and to complete projects in a timely manner. In addition to the lack of a timetable, other factors have lengthened the time frame of the contract-level audit process.
First, CMS’s sequential notification to MA organizations—first identifying which contracts had been selected for audit and then later identifying which beneficiaries under these contracts would be audited—hinders the agency’s goal of conducting annual contract-level audits because it creates a time gap. For example, for the 2011 contract-level audits, CMS officials stated that the agency notified MA organizations about the beneficiaries whose diagnoses would be audited 3 months after notifying these same MA organizations about which contracts had been selected for audit. Both the selection of contracts and beneficiaries currently require risk score and beneficiary enrollment data. Second, ongoing performance issues with the web-based system CMS uses to receive medical records submitted by MA organizations for contract-level RADV audits caused CMS to substantially lengthen the time frame for MA organizations to submit these medical records for the 2011 contract-level RADV audits. According to CMS officials, for the 2007 contract-level RADV audits, MA organizations submitted medical records for 98 percent of all audited diagnoses within a 16-week time frame. However, system performance issues with the Central Data Abstraction Tool (CDAT)—CMS’s web-based system for transferring and receiving contract-level RADV audit data—led CMS to more than triple the medical record submission time frame for the 2011 contract-level RADV audits to over 1 year. Officials from AHIP and the two MA organizations we interviewed indicated that CDAT often proved inoperable, with significant delays and errors in uploading files. CMS officials stated that the agency suspended the use of CDAT for 8 months and has since implemented steps to continue monitoring and testing CDAT’s performance.
However, officials from MA organizations stated that CDAT continued to experience significant delays in uploading files after CMS reopened CDAT for use. Officials of one MA organization suspected that the system may have been overwhelmed because CMS increased the number of medical records allowed per audited diagnosis from one to five between the 2007 and 2011 contract-level audits. For future medical record submissions, CMS officials subsequently told us that they plan to use a 20-week submission period and did not indicate to us any plans for an additional medical record submission method if CDAT’s problems persisted. CMS’s Medicare FFS program has increasingly used the Electronic Submission of Medical Documentation System (ESMD) to transfer medical records reliably from providers to Medicare contractors since 2011. Both ESMD and CDAT allow for the electronic submission of medical records by securely uploading and submitting medical record documentation in a portable document format file. CMS officials stated that the agency did not use ESMD to transfer medical records primarily because it could not also be used for medical record review like CDAT. However, medical records could be reviewed without being transferred through CDAT. The transfer of medical records has been the main source of delay in completing CMS’s contract-level audits of 2011 payments, and CMS has not assessed the feasibility of updating ESMD for transferring medical records in contract-level RADV audits. While ESMD was not available when CMS began its 2007 contract-level RADV audits, the system has demonstrated a greater capacity for transferring medical records than CDAT. In fiscal year 2014, providers used ESMD to transfer nearly 500,000 medical records—far beyond the capacity necessary for contract-level RADV audits. In interviews, officials of two FFS RACs stated that ESMD was very reliable and did not have technical issues that affected audits. 
In addition, CMS has not applied time limits to contract-level RADV reviewers for completing medical record reviews. These reviews took 3 years for the 2007 contract-level RADV audits. In contrast, CMS generally requires its Medicare Administrative Contractors (MAC)—a type of FFS contractor—to make postpayment audit determinations within 60 days of receiving medical record documentation. Because CMS has not required that contract-level RADV auditors complete medical record reviews within a specific time period, the agency is hindering its ability to reach its goal of conducting annual contract-level RADV audits. Disputes and appeals stemming from the 2007 contract-level RADV audit findings have been ongoing for years and the lack of time frames at the first level of the appeal process hinders CMS from achieving its goal of using contract-level audits to recoup improper payments. Nearly all MA organizations whose contracts were included in the 2007 contract-level RADV audit cycle disputed at least one diagnosis finding following medical record review, and five MA organizations disputed all the findings of unsupported diagnoses. CMS officials stated that MA organizations in total disputed 624 (4.3 percent) of the 14,388 audited diagnoses, and that the determinations on these disputes, which were submitted starting March 2013 through May 2013, were not complete until July 2014. If an MA organization disagrees with the medical record dispute determination, the MA organization may appeal to a hearing officer. This appeal level is called review by a CMS hearing officer. Because the medical record dispute process for the 2007 contract-level RADV audit cycle took nearly 1-1/2 years to complete, CMS officials stated that the agency did not receive all 2007 second-level appeal requests for hearing officer review until August 2014. 
CMS officials stated that the hearing officer adjudicated or received a withdrawal request from the MA organization for 377 of the 624 appeals (60 percent) from August 2014 through September 2015. Appeals for the 2011 contract-level RADV audit cycle have yet to begin, as CMS officials stated that the agency is currently in the process of reviewing medical records submitted by MA organizations for the 2011 contract-level RADV audits. CMS officials stated that the medical record dispute process for the 2011 contract-level RADV audit cycle will differ from the process used during the 2007 cycle in certain respects. In particular, for the 2011 RADV audit cycle, the medical record dispute process will be incorporated into the appeal process instead of being part of the audit process, as it was during the 2007 cycle. The new first-level appeal process, in which an MA organization can submit a written request for an independent reevaluation of the RADV audit decision, will be called the reconsideration stage. This change will allow MA organizations to request reconsideration of medical record review determinations simultaneously with the appeal of payment error calculations, rather than sequentially, as was the case during the 2007 contract-level RADV audit cycle. While such a change may be helpful, the new process does not establish time limits for when reconsideration decisions must be issued. In contrast, CMS generally imposes a 60-day time limit on MA organization decisions regarding beneficiary payment first-level appeals in MA. CMS measures the timeliness of decisions regarding MA beneficiary first-level appeals to assist the agency in assigning quality performance ratings and bonus payments to MA organizations. Similarly in Medicare FFS, officials generally must issue decisions within 60 days of receiving first-level appeal requests. 
CMS officials stated that due to the agency’s limited experience with the contract-level RADV audit process, time limits were not imposed at the reconsideration appeal level and that this issue may be revisited once CMS completes a full contract-level RADV audit cycle. The lack of explicit time frames for appeal decisions at the reconsideration level hinders CMS’s collection of improper payments as the agency cannot recover extrapolated overpayments until the MA organization exhausts all levels of appeal and is inconsistent with established project management principles. CMS has not expanded the RAC program to MA, as it was required to do by the end of 2010 by the Patient Protection and Affordable Care Act. CMS issued a request for industry comment regarding implementation of the MA RAC on December 27, 2010, seeking stakeholder input regarding potential ways improper payments could be identified in MA using RACs. CMS reported that it had received all stakeholder comments from this request by late February 2011. CMS issued a request for proposals for the MA RAC in July 2014. As defined by the Statement of Work in that request, the MA RAC would audit improper payments in the audit areas of Medicare secondary payer, end-stage renal disease, and hospice. In October 2014, CMS officials told us that the agency did not receive any proposals to conduct the work in those three audit areas and that CMS’s goal was to reissue the MA RAC solicitation in 2015. In November 2015, CMS officials told us that the agency is no longer considering Medicare secondary payer, end-stage renal disease, and hospice services as audit areas for the MA RAC. Instead, the officials told us that CMS was exploring whether and how an MA RAC could assist CMS with contract-level RADV audits. In December 2015, CMS issued a request for information seeking industry comment regarding how an MA RAC could be incorporated into CMS’s existing contract-level RADV audit framework. 
In the request document, CMS stated that it is seeking an MA RAC to help the agency expand the number of MA contracts subject to audit each year. In the request, CMS stated that its ultimate goal is to have all MA contracts subject to either a contract-level RADV audit or what it termed a condition-specific RADV audit for each payment year. Officials we interviewed from three of the current Medicare FFS RACs all acknowledged that their organizations had the capacity and willingness to conduct contract-level RADV audits. Despite its recent request for information, CMS does not have specific plans or a timetable for including RACs in the contract-level RADV audit process. Established project management principles call for developing an overall plan and monitoring framework to meet strategic goals. A plan and timetable would help guide CMS’s efforts in incorporating a RAC in MA and help hold the agency accountable for implementing this requirement from the Patient Protection and Affordable Care Act. Once the requirement is implemented, CMS could leverage the MA RAC in order to increase the number of MA organization contracts audited. CMS’s recovery of improper payments has been restricted because it has not established an MA RAC. For example, CMS currently plans to include 30 MA contracts in contract-level RADV audits for each payment year, about 5 percent of all contracts. Limitations in CMS’s processes for selecting contracts for audit, in the timeliness of CMS’s audit and appeal processes, and in the agency’s plans for using MA RACs to assist in identifying improper payments hinder the accomplishment of its contract-level RADV audit goals: to conduct annual contract-level audits and recover improper payments. These limitations are also inconsistent with federal internal control standards and established project management principles. 
Our analyses of these processes and plans suggest that CMS will likely recover a small portion of the billions of dollars in MA improper payments that occur every year. Shortcomings in CMS’s MA contract selection methodology may result in audits that are not focused on the contracts most likely to be disproportionately responsible for improper payments. Furthermore, CMS’s RADV time frames are so long that they may hamper the agency’s efforts to conduct audits annually, collect extrapolated payments efficiently, and use audit results to inform future RADV contract selection. By CMS’s own estimates, conducting annual contract-level audits would potentially allow CMS to recover hundreds of millions of dollars more in improper payments each year. Agency officials have expressed concerns about the intensive agency resources required to conduct contract-level RADV audits. To address the resource requirements of conducting contract-level audits, CMS intends to leverage the MA RACs for this purpose; however, the agency has not outlined how it plans to incorporate RACs into the contract-level RADV audits and is in the early stages of soliciting industry comment regarding how to do so. As CMS continues to implement and refine the contract-level RADV audit process, we recommend that the Administrator of CMS take actions in the following five key areas to improve the efficiency and effectiveness of reducing and recovering improper payments. 
First, to improve the accuracy of CMS’s calculation of coding intensity, the Administrator should modify that calculation by taking actions such as the following:

- including only the three most recent pair-years of risk score data for all contracts;
- standardizing the changes in disease risk scores to account for the expected increase in risk scores for all MA contracts;
- developing a method of accounting for diagnostic errors not coded by providers, such as requiring that diagnoses added by MA organizations be flagged as supplemental diagnoses in the agency’s Encounter Data System to separately calculate coding intensity scores related only to diagnoses that were added through MA organizations’ supplemental record review (that is, were not coded by providers); and
- including MA beneficiaries enrolled in contracts that were renewed from a different contract under the same MA organization during the pair-year period.

Second, the Administrator should modify CMS’s selection of contracts for contract-level RADV audits to focus on those contracts most likely to have high rates of improper payments by taking actions such as the following:

- excluding contracts with low coding intensity scores;
- selecting more contracts with the highest coding intensity scores;
- selecting contracts with high rates of unsupported diagnoses in prior contract-level RADV audits;
- if a contract with a high rate of unsupported diagnoses is no longer in operation, selecting a contract under the same MA organization that includes the service area of the prior contract; and
- selecting some contracts with high enrollment that also have either high rates of unsupported diagnoses in prior contract-level RADV audits or high coding intensity scores.
Third, the Administrator should enhance the timeliness of CMS’s contract-level RADV process by taking actions such as the following:

- closely aligning the time frames in CMS’s contract-level RADV audits with those of the national RADV audits the agency uses to estimate the MA improper payment rate;
- reducing the time between notifying MA organizations of contract audit selection and notifying them about the beneficiaries and diagnoses that will be audited;
- improving the reliability and performance of the agency’s process for transferring medical records from MA organizations, including assessing the feasibility of updating ESMD for use in transferring medical records in contract-level RADV audits; and
- requiring that CMS contract-level RADV auditors complete their medical record reviews within a specific number of days comparable to other medical record review time frames in the Medicare program.

Fourth, the Administrator should improve the timeliness of CMS’s contract-level RADV appeal process by requiring that reconsideration decisions be rendered within a specified number of days comparable to other medical record review and first-level appeal time frames in the Medicare program. Fifth, the Administrator should ensure that CMS develops specific plans and a timetable for incorporating a RAC in the MA program as mandated by the Patient Protection and Affordable Care Act. We provided a draft of this report to HHS for comment. HHS provided written comments, which are printed in appendix I. HHS concurred with our recommendations. In its comment letter, HHS also reaffirmed its commitment to identifying and correcting improper payments in the MA program. HHS also provided technical comments, which we incorporated as appropriate. Based on HHS’s technical comments, we revised our suggested actions for how HHS could meet GAO’s first recommendation.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

James Cosgrove, (202) 512-7114 or [email protected]. In addition to the contact named above, individuals making key contributions to this report include Martin T. Gahart, Assistant Director; Luis Serna III; and Marisa Beatley. Elizabeth T. Morrison and Jennifer Whitworth also provided valuable assistance.

In 2014, Medicare paid about $160 billion to MA organizations to provide health care services for approximately 16 million beneficiaries. CMS, which administers Medicare, estimates that about 9.5 percent of its payments to MA organizations were improper, according to the most recent data—primarily stemming from unsupported diagnoses submitted by MA organizations. CMS currently uses RADV audits to recover improper payments in the MA program. GAO was asked to review the extent to which CMS is addressing improper payments in the MA program. This report examines the extent to which (1) CMS's contract selection methodology for RADV audits facilitates the recovery of improper payments, (2) CMS has completed RADV audits and appeals in a timely manner, and (3) CMS has made progress toward incorporating RACs into the MA program to identify and assist with improper payment recovery.
In addition to reviewing research literature and agency documents, GAO analyzed data from ongoing RADV audits of 2007 and 2011 payments—CMS's two initial contract-level RADV audits. GAO also interviewed CMS officials. Medicare Advantage (MA) organizations contract with the Centers for Medicare & Medicaid Services (CMS) to offer beneficiaries a private plan alternative to the original program and are paid a predetermined monthly amount by Medicare for each enrolled beneficiary. These payments are risk adjusted to reflect each enrolled beneficiary's health status and projected spending for Medicare-covered services. CMS conducts risk adjustment data validation (RADV) audits of MA contracts, which facilitate the recovery of improper payments from MA organizations that submitted beneficiary diagnoses for payment adjustment purposes that were unsupported by medical records. With a separate national audit, CMS estimated that it improperly paid $14.1 billion in 2013 to MA organizations, primarily because of these unsupported diagnoses. GAO found that CMS's methodology does not result in the selection of contracts for audit that have the greatest potential for recovery of improper payments. First, CMS's estimate of improper payment risk for each contract, which is based on the diagnoses reported for the beneficiaries in that contract, is not strongly correlated with unsupported diagnoses. Second, CMS does not use other available information to select the contracts at the highest risk of improper payments. As a result, only 4 of the 30 contracts CMS selected for its RADV audit of 2011 payments were among the 10 percent of contracts estimated by CMS to be at the highest risk for improper payments. These limitations are impediments to CMS's goal of recovering improper payments and do not align with federal internal control standards, which require that agencies use quality information to achieve their program goals.
CMS's goal of eventually conducting annual RADV audits is in jeopardy because its two RADV audits to date have experienced substantial delays in identifying and recovering improper payments. RADV audits of 2007 and 2011 payments have taken multiple years and are still ongoing for several reasons. First, CMS's RADV audits rely on a system for transferring medical records from MA organizations that has often been inoperable. Second, CMS audit procedures have lacked specified time requirements for completing medical record reviews and for other steps in the RADV audit process. In addition, CMS has not established time frames for appeal decisions at the first level of the MA appeal process, as it has done in other contexts. CMS did not expand the recovery audit program to MA by the end of 2010, as the Patient Protection and Affordable Care Act required, and has yet to do so. RACs have been used in other Medicare programs to recover improper payments for a contingency fee. In December 2015, CMS issued a request for information seeking industry comment on how an MA RAC could be incorporated into the RADV audit framework. CMS noted in its request that incorporating a RAC into the RADV framework would increase the number of MA contracts audited each year. CMS currently includes 30 MA contracts in each RADV audit, about 5 percent of all MA contracts. Despite the importance of increasing the number of contracts audited, CMS does not have specific plans or a timetable for incorporating RACs into the RADV audit framework, contrary to established project management principles, which stress the importance of developing an overall plan to meet strategic goals. GAO is making five recommendations to CMS to improve its processes for selecting contracts to include in the RADV audits, enhance the timeliness of the audits, and incorporate RACs into the RADV audits. HHS concurred with the recommendations.